Results 1-20 of 1,188
1.
Heliyon ; 10(19): e37745, 2024 Oct 15.
Article in English | MEDLINE | ID: mdl-39386823

ABSTRACT

Acute myeloid leukemia (AML) is a highly aggressive cancer of myeloid cells that leads to the excessive growth of immature white blood cells (WBCs) in both bone marrow and peripheral blood. Timely AML detection is crucial for effective treatment and patient well-being. Currently, AML diagnosis relies on the manual recognition of immature WBCs through peripheral blood smear analysis, which is time-consuming, prone to errors, and subject to inter-observer variation. This study aimed to develop a computer-aided diagnostic framework for AML, called "CAE-ResVGG FusionNet", that precisely identifies and classifies immature WBCs into their respective subtypes. The proposed framework takes an integrated approach, combining a convolutional autoencoder (CAE) with finely tuned adaptations of the VGG19 and ResNet50 architectures to extract features from CAE-derived embeddings. The process begins with a binary classification model distinguishing mature from immature WBCs, followed by a multiclass classifier that further sorts immature cells into four subtypes: myeloblasts, monoblasts, erythroblasts, and promyelocytes. The CAE-ResVGG FusionNet workflow comprises four primary stages: data preprocessing, feature extraction, classification, and validation. The preprocessing phase applies data augmentation through geometric transformations and CAE-based synthetic image generation to address the imbalance in the WBC distribution. Feature extraction involves image embedding and transfer learning, where CAE-derived image representations are fed to a custom model integrating the pretrained VGG19 and ResNet50 networks. The classification phase employs a weighted ensemble of VGG19 and ResNet50, with the optimal weighting parameters selected by grid search.
The model performance was assessed during the validation phase using overall accuracy, precision, and sensitivity, while the area under the receiver operating characteristic curve (AUC) was used to evaluate the model's discriminatory capability. The proposed framework exhibited notable results, achieving an average accuracy of 99.9%, sensitivity of 91.7%, and precision of 98.8%. The model demonstrated exceptional discriminatory ability, as evidenced by an AUC of 99.6%. Notably, the proposed system outperformed previous methods, indicating its superior diagnostic ability.
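The weighted-ensemble step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name, the toy probabilities, and the grid resolution are assumptions; only the idea (blend two models' soft outputs as w*p1 + (1-w)*p2 and pick w by grid search on validation accuracy) comes from the abstract.

```python
import numpy as np

def grid_search_ensemble_weight(p1, p2, y_true, steps=101):
    """Return the blend weight w in [0, 1] that maximizes validation accuracy."""
    best_w, best_acc = 0.0, -1.0
    for w in np.linspace(0.0, 1.0, steps):
        blended = w * p1 + (1.0 - w) * p2      # weighted soft-vote of the two models
        preds = blended.argmax(axis=1)
        acc = (preds == y_true).mean()
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

# Toy validation set: 4 samples, 2 classes, hypothetical per-model probabilities.
p_vgg    = np.array([[0.9, 0.1], [0.4, 0.6], [0.8, 0.2], [0.3, 0.7]])
p_resnet = np.array([[0.6, 0.4], [0.2, 0.8], [0.7, 0.3], [0.6, 0.4]])
y = np.array([0, 1, 0, 1])
w, acc = grid_search_ensemble_weight(p_vgg, p_resnet, y)
```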

2.
Acta Crystallogr D Struct Biol ; 80(Pt 10): 744-764, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39361357

ABSTRACT

A group of three deep-learning tools, referred to collectively as CHiMP (Crystal Hits in My Plate), were created for analysis of micrographs of protein crystallization experiments at the Diamond Light Source (DLS) synchrotron, UK. The first tool, a classification network, assigns images into categories relating to experimental outcomes. The other two tools are networks that perform both object detection and instance segmentation, producing masks of individual crystals in the first case and masks of crystallization droplets in addition to crystals in the second, allowing the positions and sizes of these entities to be recorded. The creation of these tools used transfer learning, where weights from a pre-trained deep-learning network were used as a starting point and repurposed by further training on a relatively small set of data. Two of the tools are now integrated at the VMXi macromolecular crystallography beamline at DLS, where they have the potential to remove the need for any user input, both for monitoring crystallization experiments and for triggering in situ data collections. The third is being integrated into the XChem fragment-based drug-discovery screening platform, also at DLS, to allow the automatic targeting of acoustic compound dispensing into crystallization droplets.


Subjects
Crystallization , Deep Learning , Crystallization/methods , Crystallography, X-Ray/methods , Proteins/chemistry , Image Processing, Computer-Assisted/methods , Synchrotrons , Automation , Software
3.
Front Bioeng Biotechnol ; 12: 1468738, 2024.
Article in English | MEDLINE | ID: mdl-39359262

ABSTRACT

Droplet-based microfluidics techniques coupled to microscopy allow for the characterization of cells at the single-cell scale. However, such techniques generate substantial amounts of data and microscopy images that must be analyzed. Droplets on these images usually need to be classified depending on the number of cells they contain. This verification, when carried out visually by the experimenter image by image, is time-consuming and impractical for analysis of many assays or when an assay yields many putative droplets of interest. Machine learning models have already been developed to classify cell-containing droplets within microscopy images, but not in the context of assays in which non-cellular structures are present inside the droplet in addition to cells. Here we develop a deep learning model based on the ResNet-50 neural network that can be applied to functional droplet-based microfluidic assays to classify droplets according to the number of cells they contain with >90% accuracy in a very short time. This model classifies with high accuracy both droplets containing cells alongside non-cellular structures and droplets containing cells alone, and can accommodate several different cell types, for generalization to a broader array of droplet-based microfluidics applications.

4.
Sci Rep ; 14(1): 23879, 2024 Oct 12.
Article in English | MEDLINE | ID: mdl-39396096

ABSTRACT

Hyperspectral image (HSI) data carries a wide range of valuable spectral information for numerous tasks, but presents challenges such as small training samples, scarcity, and redundant information, which researchers have addressed in various works. The Convolutional Neural Network (CNN) has achieved significant success in HSI classification. A CNN primarily extracts low-level features from HSI data and has a limited ability to detect long-range dependencies due to its confined filter size. In contrast, vision transformers have shown great success in HSI classification through attention mechanisms that learn long-range dependencies. The primary issue with both families of models is that they require sufficient labeled training data. To address this challenge, we propose a spectral-spatial feature extractor group attention transformer that consists of a multiscale feature extractor for low-level, or shallow, features and a group attention mechanism for high-level semantic feature extraction. Our proposed model is evaluated on four publicly available HSI datasets: Indian Pines, Pavia University, Salinas, and the KSC dataset. It achieved the best classification results in terms of overall accuracy (OA), average accuracy (AA), and Kappa coefficient, while using only 5%, 1%, 1%, and 10% of the training samples from the four datasets, respectively.

5.
J Environ Manage ; 369: 122246, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39241598

ABSTRACT

Seagrass meadows are an essential part of the Great Barrier Reef ecosystem, providing benefits such as filtering nutrients and sediment, serving as a nursery for fish and shellfish, and capturing atmospheric carbon as blue carbon. Understanding the phenotypic plasticity of seagrasses and their ability to adapt their morphology in response to environmental stressors is crucial; investigating these morphological changes can provide valuable insights into ecosystem health and inform conservation strategies aimed at mitigating seagrass decline. Measuring seagrass growth requires measuring morphological parameters such as the length and width of leaves, rhizomes, and roots. Because the manual measurement process can be time-consuming, inaccurate, and costly, researchers are exploring machine-learning techniques that use image processing and artificial intelligence to extract morphological parameters from digital imagery automatically. This study uses a deep learning model, YOLO-v6, to classify three distinct seagrass object types and determine their dimensions. The results suggest that the proposed model is highly effective, with an average recall of 97.5%, an average precision of 83.7%, and an average F1 score of 90.1%. The model code has been made publicly available on GitHub (https://github.com/sajalhalder/AI-ASMM).


Subjects
Artificial Intelligence , Machine Learning , Ecosystem , Alismatales/anatomy & histology , Alismatales/growth & development
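The F1 score of 90.1% reported in the seagrass study above follows directly from the stated precision and recall; a quick sanity check using the standard harmonic-mean formula (this is the textbook definition, not the authors' code):

```python
# F1 is the harmonic mean of precision and recall.
def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

# Plugging in the abstract's averages (precision 83.7 %, recall 97.5 %):
f1 = f1_score(0.837, 0.975)   # ≈ 0.901, matching the reported 90.1 %
```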
6.
Sensors (Basel) ; 24(17)2024 Sep 07.
Article in English | MEDLINE | ID: mdl-39275723

ABSTRACT

This study presents the design and development of a high-resolution convex grating dispersion hyperspectral imaging system tailored for unmanned aerial vehicle (UAV) remote sensing applications. The system operates within a spectral range of 400 to 1000 nm, encompassing over 150 channels, and achieves an average spectral resolution of less than 4 nm. It features a field of view of 30°, a focal length of 20 mm, a compact volume of only 200 mm × 167 mm × 78 mm, and a total weight of less than 1.5 kg. Based on the design specifications, the system was meticulously adjusted, calibrated, and tested. Additionally, custom software for the hyperspectral system was independently developed to facilitate functions such as control parameter adjustments, real-time display, and data preprocessing of the hyperspectral camera. Subsequently, the prototype was integrated onto a drone for remote sensing observations of Spartina alterniflora at Yangkou Beach in Shouguang City, Shandong Province. Various algorithms were employed for data classification and comparison, with support vector machine (SVM) and neural network algorithms demonstrating superior classification accuracy. The experimental results indicate that the UAV-based hyperspectral imaging system exhibits high imaging quality, minimal distortion, excellent resolution, an expansive camera field of view, a broad detection range, high experimental efficiency, and remarkable capabilities for remote sensing detection.

7.
Foods ; 13(17)2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39272595

ABSTRACT

The variety and content of high-quality proteins in sunflower seeds are higher than those in cereal grains. However, sunflower seeds can suffer abnormalities, such as breakage and deformity, during planting and harvesting, which hinder the development of the sunflower seed industry. Traditional methods such as manual sensory inspection and machine sorting are highly subjective and cannot detect the internal characteristics of sunflower seeds. The development of spectral imaging technology has facilitated the application of terahertz waves to sunflower seed quality inspection, owing to their non-destructive penetration and fast imaging. This paper proposes a novel terahertz image classification model, MobileViT-E, which is trained and validated on a self-constructed dataset of sunflower seeds. The results show that the overall recognition accuracy of the proposed model reaches 96.30%, which is 4.85%, 3%, 7.84% and 1.86% higher than those of the ResNet-50, EfficientNet, MobileOne and MobileViT models, respectively. Performance indices such as recognition accuracy, recall and F1-score are also effectively improved. The MobileViT-E model can therefore improve the classification and identification of normal, damaged and deformed sunflower seeds, and provides technical support for the non-destructive detection of sunflower seed quality.

8.
Diagnostics (Basel) ; 14(17)2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39272664

ABSTRACT

Artificial intelligence (AI) is making notable advancements in the medical field, particularly in bone fracture detection. This systematic review compiles and assesses existing research on AI applications aimed at identifying bone fractures through medical imaging, encompassing studies from 2010 to 2023. It evaluates the performance of various AI models, such as convolutional neural networks (CNNs), in diagnosing bone fractures, highlighting their superior accuracy, sensitivity, and specificity compared to traditional diagnostic methods. Furthermore, the review explores the integration of advanced imaging techniques like 3D CT and MRI with AI algorithms, which has led to enhanced diagnostic accuracy and improved patient outcomes. The potential of Generative AI and Large Language Models (LLMs), such as OpenAI's GPT, to enhance diagnostic processes through synthetic data generation, comprehensive report creation, and clinical scenario simulation is also discussed. The review underscores the transformative impact of AI on diagnostic workflows and patient care, while also identifying research gaps and suggesting future research directions to enhance data quality, model robustness, and ethical considerations.

9.
BMC Med Imaging ; 24(1): 230, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39223507

ABSTRACT

Breast cancer is a leading cause of mortality among women globally, necessitating precise classification of breast ultrasound images for early diagnosis and treatment. Traditional methods using CNN architectures such as VGG, ResNet, and DenseNet, though somewhat effective, often struggle with class imbalances and subtle texture variations, leading to reduced accuracy for minority classes such as malignant tumors. To address these issues, we propose a methodology that leverages EfficientNet-B7, a scalable CNN architecture, combined with advanced data augmentation techniques to enhance minority class representation and improve model robustness. Our approach involves fine-tuning EfficientNet-B7 on the BUSI dataset, implementing RandomHorizontalFlip, RandomRotation, and ColorJitter to balance the dataset and improve model robustness. The training process includes early stopping to prevent overfitting and optimize performance metrics. Additionally, we integrate Explainable AI (XAI) techniques, such as Grad-CAM, to enhance the interpretability and transparency of the model's predictions, providing visual and quantitative insights into the features and regions of ultrasound images influencing classification outcomes. Our model achieves a classification accuracy of 99.14%, significantly outperforming existing CNN-based approaches in breast ultrasound image classification. The incorporation of XAI techniques enhances our understanding of the model's decision-making process, thereby increasing its reliability and facilitating clinical adoption. This comprehensive framework offers a robust and interpretable tool for the early detection and diagnosis of breast cancer, advancing the capabilities of automated diagnostic systems and supporting clinical decision-making processes.


Subjects
Breast Neoplasms , Ultrasonography, Mammary , Humans , Breast Neoplasms/diagnostic imaging , Female , Ultrasonography, Mammary/methods , Image Interpretation, Computer-Assisted/methods , Neural Networks, Computer , Artificial Intelligence
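The horizontal-flip augmentation named in the breast-ultrasound study above can be illustrated framework-free. This toy sketch treats an image as a nested list; torchvision's RandomHorizontalFlip does the same on tensors (alongside the rotation and color-jitter transforms the abstract lists), so the function name and data here are illustrative assumptions only.

```python
import random

def random_horizontal_flip(img, p=0.5, rng=random.random):
    """Flip each row of an H x W image left-to-right with probability p."""
    if rng() < p:
        return [row[::-1] for row in img]
    return img

img = [[1, 2, 3],
       [4, 5, 6]]
flipped = random_horizontal_flip(img, p=1.0)   # p=1 forces the flip
```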
10.
Phys Eng Sci Med ; 2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39235668

ABSTRACT

In this paper, we propose a complete study method to achieve accurate aortic dissection diagnosis at the patient level. Based on CT angiography (CTA) images, we propose a classification model named DAT-DenseNet, which combines a deep attention Transformer module with the DenseNet architecture. In the first stage, two DAT-DenseNet models are combined in parallel to perform two classification tasks accurately on the CTA images. In the second stage, we propose a feature fusion module that concatenates and fuses the image features output by the two classification models patient by patient. In comparison experiments on classification performance, DAT-DenseNet obtained 92.41 % accuracy at the image level, 2.20 % higher than the commonly used model. In comparison experiments on the model fusion method, our method obtained 90.83 % accuracy at the patient level. The experiments showed that the DAT-DenseNet model exhibits high performance at the image level, and that our feature fusion module maps the two sets of classification image features to patient outcomes, achieving accurate patient-level classification. The experimental results in the Discussion section elaborate the details of the experiments and confirm that the results are reliable.
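The patient-level fusion step described above — concatenating the image features from the two classification branches per patient — might look like the following sketch. The shapes, the mean-pooling to patient level, and the branch names are illustrative assumptions; only the concatenate-and-fuse idea comes from the abstract.

```python
import numpy as np

# Hypothetical per-image feature vectors from the two DAT-DenseNet branches
# for one patient with 3 CTA images and 128-dim features per branch.
feat_branch_a = np.random.rand(3, 128)
feat_branch_b = np.random.rand(3, 128)

# Concatenate the two branches' features image by image, then pool over the
# patient's images to obtain a single patient-level representation.
fused = np.concatenate([feat_branch_a, feat_branch_b], axis=1)  # shape (3, 256)
patient_feature = fused.mean(axis=0)                            # shape (256,)
```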

11.
Data Brief ; 56: 110821, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39252785

ABSTRACT

Fruits are mature ovaries of flowering plants that are integral to human diets, providing essential nutrients such as vitamins, minerals, fiber and antioxidants that are crucial for health and disease prevention. Accurate classification and segmentation of fruits are crucial in the agricultural sector for enhancing the efficiency of sorting and quality control processes, which significantly benefit automated systems by reducing labor costs and improving product consistency. This paper introduces the "FruitSeg30_Segmentation Dataset & Mask Annotations", a novel dataset designed to advance the capability of deep learning models in fruit segmentation and classification. Comprising 1969 high-quality images across 30 distinct fruit classes, this dataset provides diverse visuals essential for a robust model. Utilizing a U-Net architecture, the model trained on this dataset achieved training accuracy of 94.72 %, validation accuracy of 92.57 %, precision of 94 %, recall of 91 %, f1-score of 92.5 %, IoU score of 86 %, and maximum dice score of 0.9472, demonstrating superior performance in segmentation tasks. The FruitSeg30 dataset fills a critical gap and sets new standards in dataset quality and diversity, enhancing agricultural technology and food industry applications.
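The IoU and Dice scores reported for the FruitSeg30 U-Net above are standard overlap metrics on binary masks; a minimal sketch of both (textbook definitions, not the dataset authors' code; the toy masks are made up):

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union

def dice(pred, target):
    """Dice coefficient: 2|A∩B| / (|A|+|B|)."""
    inter = np.logical_and(pred, target).sum()
    return 2 * inter / (pred.sum() + target.sum())

pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
# intersection = 2 pixels, union = 4 pixels
```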

12.
J Imaging Inform Med ; 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39266912

ABSTRACT

PURPOSE: To develop a deep learning model for automated classification of orthopedic hardware on pelvic and hip radiographs, which can be clinically implemented to decrease radiologist workload and improve consistency among radiology reports. MATERIALS AND METHODS: Pelvic and hip radiographs from 4279 studies in 1073 patients were retrospectively obtained and reviewed by musculoskeletal radiologists. Two convolutional neural networks, EfficientNet-B4 and NFNet-F3, were trained to perform the image classification task into the following most represented categories: no hardware, total hip arthroplasty (THA), hemiarthroplasty, intramedullary nail, femoral neck cannulated screws, dynamic hip screw, lateral blade/plate, THA with additional femoral fixation, and post-infectious hip. Model performance was assessed on an independent test set of 851 studies from 262 patients and compared to individual performance of five subspecialty-trained radiologists using leave-one-out analysis against an aggregate gold standard label. RESULTS: For multiclass classification, the area under the receiver operating characteristic curve (AUC) for NFNet-F3 was 0.99 or greater for all classes, and EfficientNet-B4 0.99 or greater for all classes except post-infectious hip, with an AUC of 0.97. When compared with human observers, models achieved an accuracy of 97%, which is non-inferior to four out of five radiologists and outperformed one radiologist. Cohen's kappa coefficient for both models ranged from 0.96 to 0.97, indicating excellent inter-reader agreement. CONCLUSION: A deep learning model can be used to classify a range of orthopedic hip hardware with high accuracy and comparable performance to subspecialty-trained radiologists.

13.
Heliyon ; 10(17): e36754, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39286174

ABSTRACT

Corrosion is one of the key factors leading to material failure, and can occur in facilities and equipment closely tied to people's lives, causing structural damage and thus threatening the safety of lives and property. To identify corrosion more effectively across multiple facilities and equipment, this paper uses a corrosion binary-classification dataset containing various materials to develop a CNN classification model for better detection and distinction of material corrosion, following a transfer-learning and fine-tuning paradigm. The proposed implementation first applies data augmentation to enhance the dataset and trains EfficientNetV2 variants of different sizes, evaluated using the confusion matrix, the ROC curve, and precision, recall, and F1-score values. To further improve the testing results, this paper examines the impact of using a Global Average Pooling layer versus a Global Max Pooling layer, as well as the number of fine-tuned layers. The results show that the Global Average Pooling layer performs better: EfficientNetV2B0 with a fine-tuning rate of 20 % and EfficientNetV2S with a fine-tuning rate of 15 % achieve the highest testing accuracy of 0.9176, an ROC-AUC of 0.97, and precision, recall, and F1-score values exceeding 0.9. These findings can serve as a reference for other corrosion classification models that use EfficientNetV2.
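The two pooling heads compared above differ only in the reduction applied per channel: global average pooling collapses each channel's feature map to its mean, global max pooling to its maximum. A minimal sketch with an illustrative feature map (the shape is an assumption; the abstract's models operate on much larger maps):

```python
import numpy as np

# Toy feature map of shape (channels, H, W) = (2, 2, 3).
fmap = np.arange(2 * 2 * 3, dtype=float).reshape(2, 2, 3)

gap = fmap.mean(axis=(1, 2))   # global average pooling -> one value per channel
gmp = fmap.max(axis=(1, 2))    # global max pooling     -> one value per channel
```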

14.
J Environ Manage ; 369: 122324, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39222586

ABSTRACT

Urban and suburban development frequently disturbs and compacts soils, reducing infiltration rates and fertility, posing challenges for post-development vegetation establishment, and contributing to soil erosion. This study investigated the effectiveness of compost incorporation in enhancing stormwater infiltration and vegetation establishment in urban landscapes. Experimental treatments comprised a split-split plot design with vegetation mix (grass, wildflowers, and grass-wildflowers) as the main plot, ground cover (hydro-mulch and excelsior) as the subplot, and compost (30% Compost and No-Compost) as the sub-subplot factor. Wildflower inclusion was motivated by their recognized ecological benefits, including aesthetics, pollinator habitat, and deep root systems. Vegetation cover was assessed using RGB (Red-Green-Blue) imagery and ArcGIS-based supervised image classification. Over a 24-month period, bulk density, infiltration rate, soil penetration resistance, vegetation cover, and root mass density were assessed. Results highlighted that Compost treatments consistently reduced bulk density by 19-24%, lowered soil penetration resistance to under 2 MPa at both field-capacity and water-stressed conditions, and increased infiltration rate by 2-3 times compared to No-Compost treatments. Vegetation cover assessment revealed rapid establishment with 30% compost and a 60:40 grass-wildflower mix, persisting for the initial 12 months. Subsequently, all treatments exhibited similar vegetation coverage from 13 to 24 months, reaching 95-100% cover. Compost treatments had significantly higher root mass density within the top 15 cm than No-Compost, but compost addition did not alter the root profile below the 15 cm incorporation depth.
The findings suggest that incorporating 30% compost and including a wildflower or grass-wildflower mix is effective in enhancing stormwater infiltration and rapidly establishing erosion-control vegetation cover in post-construction landscapes.


Subjects
Composting , Soil , Composting/methods , Soil Erosion , Poaceae/growth & development , Ecosystem
15.
J Bone Oncol ; 48: 100629, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39257652

ABSTRACT

Objective: This study aims to explore the application of radiographic imaging and image recognition algorithms, particularly AlexNet and ResNet, in classifying malignancies of spinal bone tumors. Methods: We selected a cohort of 580 patients diagnosed with primary spinal osseous tumors who underwent treatment at our hospital between January 2016 and December 2023, from whose imaging 1532 images (679 of benign tumors, 853 of malignant tumors) were extracted. Training and validation sets followed a 2:1 ratio. All patients underwent X-ray examinations as part of their diagnostic workup. This study employed convolutional neural networks (CNNs) to categorize spinal bone tumor images according to their malignancy, using the AlexNet and ResNet models. These models were fine-tuned through training on a database of bone tumor images representing the different categories. Results: Through rigorous experimentation, the performance of AlexNet and ResNet in classifying spinal bone tumor malignancy was extensively evaluated on this dataset. AlexNet: This model exhibited commendable efficiency during training, with each epoch taking an average of 3 s, and a classification accuracy of approximately 95.6 %. ResNet: This model showed remarkable accuracy in image classification; after an extended training period, it achieved a 96.2 % accuracy rate, signifying its proficiency in distinguishing the malignancy of spinal bone tumors. These results illustrate the clear advantage of AlexNet in training efficiency, despite its lower classification accuracy.
The robust performance of the ResNet model makes it preferable when accuracy is prioritized in diagnosing spinal bone tumor malignancy, albeit at the cost of longer training times, with each epoch taking an average of 32 s. Conclusion: Integrating deep learning and CNN-based image recognition technology offers a promising solution for classifying bone tumors. This research underscores the potential of these models in enhancing the diagnosis and treatment processes, benefiting both patients and medical professionals. The study highlights the significance of selecting an appropriate model, such as ResNet, to improve accuracy in image recognition tasks.

16.
Medicina (Kaunas) ; 60(9)2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39336534

ABSTRACT

Background/Objectives: To develop a deep learning model for esophageal motility disorder diagnosis using high-resolution manometry images with the aid of Gemini. Methods: Gemini assisted in developing this model by aiding in code writing, preprocessing, model optimization, and troubleshooting. Results: The model demonstrated an overall precision of 0.89 on the testing set, with an accuracy of 0.88, a recall of 0.88, and an F1-score of 0.885. It presented better results for multiple categories, particularly in the panesophageal pressurization category, with precision = 0.99 and recall = 0.99, yielding a balanced F1-score of 0.99. Conclusions: This study demonstrates the potential of artificial intelligence, particularly Gemini, in aiding the creation of robust deep learning models for medical image analysis, solving not just simple binary classification problems but more complex, multi-class image classification tasks.


Subjects
Deep Learning , Esophageal Motility Disorders , Manometry , Humans , Manometry/methods , Esophageal Motility Disorders/diagnosis , Esophageal Motility Disorders/classification , Esophageal Motility Disorders/physiopathology , Image Processing, Computer-Assisted/methods , Esophagus/diagnostic imaging , Esophagus/physiopathology , Esophagus/physiology
17.
Med Biol Eng Comput ; 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39343842

ABSTRACT

Recent advancements in deep learning have significantly improved the intelligent classification of gastrointestinal (GI) diseases, particularly in aiding clinical diagnosis. This paper reviews computer-aided diagnosis (CAD) systems for GI diseases, aligning with the actual clinical diagnostic process. It offers a comprehensive survey of deep learning (DL) techniques tailored for classifying GI diseases, addressing challenges inherent in complex scenes, clinical constraints, and technical obstacles encountered in GI imaging. Firstly, the esophagus, stomach, small intestine, and large intestine are localized to determine which organ contains the lesion. Secondly, location detection and classification of a single disease are performed given the organ corresponding to the image. Finally, comprehensive classification across multiple diseases is carried out. The results of single-disease and multi-disease classification are compared to achieve more accurate outcomes, and a more effective computer-aided diagnosis system for gastrointestinal diseases is thereby constructed.

18.
J Oral Pathol Med ; 53(9): 551-566, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39256895

ABSTRACT

BACKGROUND: Artificial intelligence (AI)-based tools have shown promise in histopathology image analysis in improving the accuracy of oral squamous cell carcinoma (OSCC) detection with intent to reduce human error. OBJECTIVES: This systematic review and meta-analysis evaluated deep learning (DL) models for OSCC detection on histopathology images by assessing common diagnostic performance evaluation metrics for AI-based medical image analysis studies. METHODS: Diagnostic accuracy studies that used DL models for the analysis of histopathological images of OSCC compared to the reference standard were analyzed. Six databases (PubMed, Google Scholar, Scopus, Embase, ArXiv, and IEEE) were screened for publications without any time limitation. The QUADAS-2 tool was utilized to assess quality. The meta-analyses included only studies that reported true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) in their test sets. RESULTS: Of 1267 screened studies, 17 studies met the final inclusion criteria. DL methods such as image classification (n = 11) and segmentation (n = 3) were used, and some studies used combined methods (n = 3). On QUADAS-2 assessment, only three studies had a low risk of bias across all applicability domains. For segmentation studies, 0.97 was reported for accuracy, 0.97 for sensitivity, 0.98 for specificity, and 0.92 for Dice. For classification studies, accuracy was reported as 0.99, sensitivity 0.99, specificity 1.0, Dice 0.95, F1 score 0.98, and AUC 0.99. Meta-analysis showed pooled estimates of 0.98 sensitivity and 0.93 specificity. CONCLUSION: Application of AI-based classification and segmentation methods on image analysis represents a fundamental shift in digital pathology. DL approaches demonstrated significantly high accuracy for OSCC detection on histopathology, comparable to that of human experts in some studies. 
Although AI-based models cannot replace a well-trained pathologist, they can assist through improving the objectivity and repeatability of the diagnosis while reducing variability and human error as a consequence of pathologist burnout.


Subjects
Carcinoma, Squamous Cell , Deep Learning , Mouth Neoplasms , Humans , Mouth Neoplasms/pathology , Mouth Neoplasms/diagnostic imaging , Carcinoma, Squamous Cell/pathology , Carcinoma, Squamous Cell/diagnostic imaging , Image Processing, Computer-Assisted/methods , Artificial Intelligence
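The pooled sensitivity and specificity in the meta-analysis above are computed from each study's TP, FP, TN, and FN counts; a sketch of the per-study quantities with illustrative counts (made up for the example, not the review's data):

```python
# Standard definitions used in diagnostic-accuracy meta-analyses.
def sensitivity(tp, fn):
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

# Hypothetical study with 100 diseased and 100 healthy cases.
sens = sensitivity(tp=98, fn=2)    # 0.98
spec = specificity(tn=93, fp=7)    # 0.93
```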
19.
Data Brief ; 57: 110893, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39328969

ABSTRACT

Deep learning applied to raw data has demonstrated outstanding image classification performance, mainly when abundant data is available. However, performance degrades significantly when a substantial volume of data is unavailable. Furthermore, deep architectures struggle to reach satisfactory performance when distinguishing between classes is difficult, as in fine-grained image classification. Utilizing a priori knowledge alongside raw data can enhance image classification in such demanding scenarios. Nevertheless, only a limited number of image classification datasets that include a priori knowledge are currently available, restricting research in this field. This paper introduces novel classification datasets that integrate a priori knowledge, built from existing data typically employed for multilabel multiclass classification or object detection. Frequent closed itemset mining is used to create classes and their corresponding attributes (e.g. the presence of an object in an image) and then to extract a priori knowledge expressed as rules over these attributes. The algorithm for generating the rules is described.
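Frequent closed itemset mining, used above to derive classes and attributes, keeps only those frequent itemsets that have no equally-frequent strict superset. A brute-force toy sketch (the transactions, threshold, and function name are illustrative; real miners use algorithms such as LCM or CHARM rather than enumeration):

```python
from itertools import combinations

def closed_itemsets(transactions, min_support=2):
    """Return {itemset: support} for frequent closed itemsets (brute force)."""
    items = sorted({i for t in transactions for i in t})
    support = {}
    for r in range(1, len(items) + 1):
        for combo in combinations(items, r):
            s = sum(1 for t in transactions if set(combo) <= t)
            if s >= min_support:
                support[frozenset(combo)] = s
    # An itemset is closed iff no strict superset has the same support.
    return {i: s for i, s in support.items()
            if not any(i < j and s == sj for j, sj in support.items())}

# Toy "object presence" transactions: which objects appear in each image.
tx = [{"cat", "sofa"}, {"cat", "sofa"}, {"cat"}]
result = closed_itemsets(tx, min_support=2)
```

Here {"sofa"} is frequent but not closed, because its superset {"cat", "sofa"} occurs in exactly the same two transactions.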

20.
J Biomed Inform ; 158: 104728, 2024 Sep 21.
Article in English | MEDLINE | ID: mdl-39307515

ABSTRACT

OBJECTIVE: Histological classification is a challenging task due to the diverse appearances, unpredictable variations, and blurry edges of histological tissues. Recently, many approaches based on large networks have achieved satisfactory performance. However, most of these methods rely heavily on substantial computational resources and large high-quality datasets, limiting their practical application. Knowledge Distillation (KD) offers a promising solution by enabling smaller networks to achieve performance comparable to that of larger networks. Nonetheless, KD is hindered by the problem of high-dimensional characteristics, which makes it difficult to capture tiny scattered features and often leads to the loss of edge feature relationships. METHODS: A novel cross-domain visual prompting distillation approach is proposed, compelling the teacher network to facilitate the extraction of significant high-dimensional features into low-dimensional feature maps, thereby aiding the student network in achieving superior performance. Additionally, a dynamic learnable temperature module based on novel vector-based spatial proximity is introduced to further encourage the student to imitate the teacher. RESULTS: Experiments conducted on widely accepted histological datasets, NCT-CRC-HE-100K and LC25000, demonstrate the effectiveness of the proposed method and validate its robustness on the popular dermoscopic dataset ISIC-2019. Compared to state-of-the-art knowledge distillation methods, the proposed method achieves better performance and greater robustness with optimal domain adaptation. CONCLUSION: A novel distillation architecture, termed VPSP, tailored for histological classification, is proposed. This architecture achieves superior performance with optimal domain adaptation, enhancing the clinical application of histological classification. The source code will be released at https://github.com/xiaohongji/VPSP.
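The temperature-based distillation idea behind the abstract above can be sketched with the standard KD loss: teacher and student logits are softened with a temperature T, and the student is trained to match the teacher's softened distribution. This is the classic Hinton-style formulation, not the paper's VPSP module (whose temperature is learnable and vector-based); logits and T below are illustrative.

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable temperature-scaled softmax."""
    e = np.exp(z / T - np.max(z / T))
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on softened distributions, scaled by T^2."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return float(np.sum(p_t * (np.log(p_t) - np.log(p_s))) * T * T)

loss_same = kd_loss(np.array([2.0, 1.0]), np.array([2.0, 1.0]))  # identical logits
loss_diff = kd_loss(np.array([0.0, 3.0]), np.array([2.0, 1.0]))  # mismatched logits
```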
