Results 1 - 20 of 44
1.
Data Brief ; 55: 110720, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39100779

ABSTRACT

Accurate inspection of rebars in Reinforced Concrete (RC) structures is essential and requires careful counting. Deep learning algorithms utilizing object detection can facilitate this process through Unmanned Aerial Vehicle (UAV) imagery. However, their effectiveness depends on the availability of large, diverse, and well-labelled datasets. This article details the creation of a dataset specifically for counting rebars using deep learning-based object detection methods. The dataset comprises 874 raw images, divided into three subsets: 524 images for training (60%), 175 for validation (20%), and 175 for testing (20%). To enhance the training data, we applied eight augmentation techniques (brightness, contrast, perspective, rotation, scale, shearing, translation, and blurring) exclusively to the training subset. This resulted in nine distinct augmented datasets: one for each augmentation technique and one combining all techniques. Expert annotators labelled the dataset in VOC XML format. While this research focuses on rebar counting, the raw dataset can be adapted for other tasks, such as estimating rebar diameter or classifying rebar shapes, by providing the necessary annotations.
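As an illustrative sketch only (the paper's own augmentation pipeline and parameters are not given in the abstract), a few of the listed techniques can be reproduced with plain NumPy; the image size, brightness factor, and shift values below are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
# Dummy 8-bit image standing in for a real UAV rebar photo
# (the dataset itself is not bundled here).
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

def brightness(im, factor=1.2):
    """Scale pixel intensities, clipping to the valid 8-bit range."""
    return np.clip(im.astype(np.float32) * factor, 0, 255).astype(np.uint8)

def translate(im, dx=5, dy=3):
    """Shift the image; np.roll wraps around, a common simplification."""
    return np.roll(np.roll(im, dy, axis=0), dx, axis=1)

def box_blur(im, k=3):
    """Naive box blur: a separable k-wide running mean along each axis."""
    out = im.astype(np.float32)
    kernel = np.ones(k) / k
    for axis in (0, 1):
        out = np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode="same"), axis, out)
    return out.astype(np.uint8)

# One augmented variant per technique, as in the single-technique datasets.
augmented = [brightness(img), translate(img), np.rot90(img), box_blur(img)]
```

Each call leaves the image shape unchanged, so the VOC XML boxes only need coordinate adjustments for the geometric transforms.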

2.
Data Brief ; 55: 110633, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39035836

ABSTRACT

This data article presents a comprehensive dataset comprising breast cancer images collected from patients, encompassing two distinct sets: one from individuals diagnosed with breast cancer and another from those without the condition. Expert physicians carefully selected, verified, and categorized the dataset to guarantee its quality and dependability for use in research and teaching. The dataset, which originates from Sulaymaniyah, Iraq, provides a distinctive viewpoint on the frequency and features of breast cancer in the area. This dataset offers a wealth of information for developing and testing deep learning algorithms for identifying breast cancer, with 745 original images and 9,685 augmented images. The addition of augmented X-rays to the dataset increases its adaptability for algorithm development and instructional projects. This dataset holds immense potential for advancing medical research, aiding in the development of innovative diagnostic tools, and fostering educational opportunities for medical students interested in breast cancer detection and diagnosis.

3.
Neural Netw ; 178: 106467, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38908168

ABSTRACT

In recent years, research on transferable feature-level adversarial attacks has become a hot spot because such attacks can successfully fool unknown deep neural networks. However, several problems limit their transferability. Existing feature disruption methods often focus on computing feature weights precisely while overlooking the noise in feature maps, which results in disturbing non-critical features. Meanwhile, geometric augmentation algorithms are used to enhance image diversity but compromise information integrity, which hampers models from capturing comprehensive features. Furthermore, current feature perturbation does not account for the density distribution of object-relevant key features, which concentrate mainly in salient regions and are sparser in the widely distributed background, and thus achieves limited transferability. To tackle these challenges, this paper proposes a feature distribution-aware transferable adversarial attack method, called FDAA, that applies distinct strategies to different image regions. A novel Aggregated Feature Map Attack (AFMA) is presented to significantly denoise feature maps, and an input transformation strategy, called Smixup, is introduced to help feature disruption algorithms capture comprehensive features. Extensive experiments demonstrate that the proposed scheme achieves better transferability, with an average success rate of 78.6% on adversarially trained models.

4.
IEEE Open J Eng Med Biol ; 5: 353-361, 2024.
Article in English | MEDLINE | ID: mdl-38899027

ABSTRACT

Goal: In recent years, deep neural networks have consistently outperformed previously proposed methods in the domain of medical segmentation. However, due to their nature, these networks often struggle to delineate desired structures in data that fall outside their training distribution. The goal of this study is to address the challenges associated with domain generalization in CT segmentation by introducing a novel method called BucketAugment for deep neural networks. Methods: BucketAugment leverages principles from the Q-learning algorithm and employs validation loss to search for an optimal policy within a search space comprised of distributed stacks of 3D volumetric augmentations, termed 'buckets.' These buckets have tunable parameters and can be seamlessly integrated into existing neural network architectures, offering flexibility for customization. Results: In our experiments, we focus on segmenting kidney and liver structures across three distinct medical datasets, each containing CT scans of the abdominal region collected from various clinical institutions and scanner vendors. Our results indicate that BucketAugment significantly enhances domain generalization across diverse medical datasets, requiring only minimal modifications to existing network architectures. Conclusions: The introduction of BucketAugment provides a promising solution to the challenges of domain generalization in CT segmentation. By leveraging Q-learning principles and distributed stacks of 3D augmentations, this method improves the performance of deep neural networks on medical segmentation tasks, demonstrating its potential to enhance the applicability of such models across different datasets and clinical scenarios.

5.
J Sci Food Agric ; 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38877787

ABSTRACT

BACKGROUND: With the rapid development of deep learning, the recognition of rice disease images using deep neural networks has become a hot research topic. However, most previous studies focus only on modifying deep learning models, while lacking systematic and scientific exploration of the impact of dataset size on the image recognition task for rice diseases. In this study, a functional model was developed to predict the relationship between dataset size and model recognition accuracy. RESULTS: When VGG16 deep learning models were trained with different quantities of images of rice blast-diseased leaves and healthy rice leaves, the test accuracy of the resulting models was well fitted by an exponential model (A = 0.9965 − e^(−0.0603×I50−1.6693)). Experimental results showed that as image quantity increases, the recognition accuracy of deep learning models at first rises rapidly. Yet once the image quantity passes a certain threshold, classification accuracy improves little and the marginal benefit diminishes. This trend remained similar when the composition of the dataset was changed, whether (i) the disease class was changed, (ii) the number of classes was increased or (iii) the image data were augmented. CONCLUSIONS: This study provides a scientific basis for the impact of data size on the accuracy of rice disease image recognition, and may also serve as a reference for researchers constructing databases. © 2024 Society of Chemical Industry.
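The fitted model can be evaluated directly to see the diminishing marginal benefit the abstract describes. The function below assumes the exponent is linear in an image-quantity term `i` (the abstract's "I50" notation is ambiguous in this extraction), so treat it as a sketch of the curve's shape rather than the paper's exact formula:

```python
import numpy as np

def fitted_accuracy(i):
    """Fitted test accuracy: A = 0.9965 - e^(-0.0603*i - 1.6693).
    'i' stands in for the paper's image-quantity term (notation assumed)."""
    return 0.9965 - np.exp(-0.0603 * np.asarray(i, dtype=float) - 1.6693)

sizes = np.array([0, 10, 50, 100, 200])
acc = fitted_accuracy(sizes)
# Accuracy rises quickly at first, then saturates toward the 0.9965 asymptote.
```

The gap to the asymptote shrinks exponentially, which is exactly the "reduced marginal benefit" beyond a threshold dataset size.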

6.
Sensors (Basel) ; 24(3)2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38339665

ABSTRACT

This paper introduces a noise augmentation technique designed to enhance the robustness of state-of-the-art (SOTA) deep learning models against degraded image quality, a common challenge in long-term recording systems. Our method, demonstrated through the classification of digital holographic images, utilizes a novel approach to synthesize and apply random colored noise, addressing the typically encountered correlated noise patterns in such images. Empirical results show that our technique not only maintains classification accuracy in high-quality images but also significantly improves it when given noisy inputs without increasing the training time. This advancement demonstrates the potential of our approach for augmenting data for deep learning models to perform effectively in production under varied and suboptimal conditions.

7.
JACC Case Rep ; 26: 102041, 2023 Nov 15.
Article in English | MEDLINE | ID: mdl-38094175

ABSTRACT

We demonstrated a first-in-human case of successful antegrade dissection and re-entry using an image-guided re-entry catheter that enables real-time high-resolution visualization with graphical augmentation, and precision steering and advancement of a guidewire. The total time from over-the-wire deployment in the proximity of the distal cap to successful re-entry was <20 minutes. (Level of Difficulty: Advanced.)

8.
Heliyon ; 9(11): e21176, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38027689

ABSTRACT

Cosmetics consumers need to be aware of their skin type before purchasing products. Identifying skin types can be challenging, especially when they vary from oily to dry in different areas, and skin specialists provide more accurate results. In recent years, artificial intelligence and machine learning have been utilized across various fields, including medicine, to assist in identifying and predicting situations. This study developed a skin type classification model using a convolutional neural network (CNN) deep learning algorithm. The dataset consisted of normal, oily, and dry skin images, with 112 images for normal skin, 120 for oily skin, and 97 for dry skin. Image quality was enhanced using the Contrast Limited Adaptive Histogram Equalization (CLAHE) technique, and data augmentation by rotation was applied to increase dataset variety, resulting in a total of 1,316 images. CNN architectures including MobileNet-V2, EfficientNet-V2, InceptionV2, and ResNet-V1 were optimized and evaluated. Findings showed that the EfficientNet-V2 architecture performed best, achieving an accuracy of 91.55% with an average loss of 22.74%. To further improve the model, hyperparameter tuning was conducted, resulting in an accuracy of 94.57% and a loss of 13.77%. The model's performance was validated using 10-fold cross-validation and tested on unseen data, achieving an accuracy of 89.70% with a loss of 21.68%.
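The reported growth from 329 source images (112 + 120 + 97) to 1,316 matches a factor of exactly 4, consistent with four right-angle rotations per image; that reading is an assumption, sketched here:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for the 329 source photos (112 normal + 120 oily + 97 dry skin).
originals = [rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
             for _ in range(112 + 120 + 97)]

# Rotation-only augmentation: 0/90/180/270 degrees per image,
# growing 329 images to 329 x 4 = 1,316 (matching the abstract's count).
augmented = [np.rot90(im, k=k) for im in originals for k in range(4)]
```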

9.
Biomimetics (Basel) ; 8(6)2023 Oct 09.
Article in English | MEDLINE | ID: mdl-37887611

ABSTRACT

Intelligent video surveillance plays a pivotal role in enhancing the infrastructure of smart urban environments. The seamless integration of multi-angled cameras, functioning as perceptive sensors, significantly enhances pedestrian detection and augments security measures in smart cities. Nevertheless, current pedestrian-focused target detection encounters challenges such as slow detection speeds and increased costs. To address these challenges, we introduce the YOLOv5-MS model, a YOLOv5-based solution for target detection. Initially, we optimize the multi-threaded acquisition of video streams within YOLOv5 to ensure image stability and real-time performance. Subsequently, leveraging reparameterization, we replace the original BackBone convolution with RepvggBlock, streamlining the model by reducing convolutional layer channels, thereby enhancing the inference speed. Additionally, the incorporation of a bioinspired "squeeze and excitation" module in the convolutional neural network significantly enhances the detection accuracy. This module improves target focusing and diminishes the influence of irrelevant elements. Furthermore, the integration of the K-means algorithm and bioinspired Retinex image augmentation during training effectively enhances the model's detection efficacy. Finally, loss computation adopts the Focal-EIOU approach. The empirical findings from our internally developed smart city dataset unveil YOLOv5-MS's impressive 96.5% mAP value, indicating a significant 2.0% advancement over YOLOv5s. Moreover, the average inference speed demonstrates a notable 21.3% increase. These data decisively substantiate the model's superiority, showcasing its capacity to effectively perform pedestrian detection within an Intranet of over 50 video surveillance cameras, in harmony with our stringent requisites.

10.
Front Public Health ; 11: 1225478, 2023.
Article in English | MEDLINE | ID: mdl-37841722

ABSTRACT

Introduction: Falls from height (FFH) accidents can devastate families and individuals. Currently, the best way to prevent falls from height is to wear personal protective equipment (PPE). However, traditional manual checking methods for safety hazards are inefficient and make it difficult to detect and eliminate potential risks. Methods: To better detect whether a person working at height is wearing PPE, this paper first applies field research and Python crawling techniques to create a dataset of people working at height, extends the dataset to 10,000 images through data enhancement (brightness, rotation, blurring, and Mosaic), and divides the dataset into a training set, a validation set, and a test set according to the ratio of 7:2:1. In this study, three improved YOLOv5s models are proposed for detecting PPE on construction sites with many open-air operations, complex construction scenarios, and frequent personnel changes. Among them, YOLOv5s-gnConv is wholly based on the convolutional structure; it achieves effective modeling of higher-order spatial interactions through gated convolution (gnConv) and recursive design, improves the performance of the algorithm, and increases the expressiveness of the model while reducing the network parameters. Results: Experimental results show that YOLOv5s-gnConv outperforms the official YOLOv5s model by 5.01%, 4.72%, and 4.26% in precision, recall, and mAP_0.5, respectively, better ensuring the safety of workers at height. Discussion: To deploy the YOLOv5s-gnConv model in a construction site environment and effectively monitor and manage the safety of workers at height, we also discuss the impacts and potential limitations of lighting conditions, camera angles, and worker movement patterns.


Subjects
Accidental Falls , Algorithms , Humans , Personal Protective Equipment
11.
Front Artif Intell ; 6: 1200977, 2023.
Article in English | MEDLINE | ID: mdl-37483870

ABSTRACT

Introduction: Machine learning tasks often require a significant amount of training data for the resultant network to perform suitably for a given problem in any domain. In agriculture, dataset sizes are further limited by phenotypical differences between two plants of the same genotype, often as a result of different growing conditions. Synthetically-augmented datasets have shown promise in improving existing models when real data is not available. Methods: In this paper, we employ a contrastive unpaired translation (CUT) generative adversarial network (GAN) and simple image processing techniques to translate indoor plant images to appear as field images. While we train our network to translate an image containing only a single plant, we show that our method is easily extendable to produce multiple-plant field images. Results: Furthermore, we use our synthetic multi-plant images to train several YOLOv5 nano object detection models to perform the task of plant detection and measure the accuracy of the model on real field data images. Discussion: The inclusion of training data generated by the CUT-GAN leads to better plant detection performance compared to a network trained solely on real data.

12.
Front Plant Sci ; 14: 1142957, 2023.
Article in English | MEDLINE | ID: mdl-37484461

ABSTRACT

This study proposes an adaptive image augmentation scheme using deep reinforcement learning (DRL) to improve the performance of a deep learning-based automated optical inspection system. The study addresses the challenge of inconsistent performance among single image augmentation methods. It introduces a DRL algorithm, DQN, to select the most suitable augmentation method for each image. The proposed approach extracts geometric and pixel indicators to form states, and uses the DeepLab-v3+ model to verify the augmented images and generate rewards. Image augmentation methods are treated as actions, and the DQN algorithm selects the best methods based on the images and segmentation model. The study demonstrates that the proposed framework outperforms any single image augmentation method and achieves better segmentation performance than other semantic segmentation models. The framework has practical implications for developing more accurate and robust automated optical inspection systems, critical for ensuring product quality in various industries. Future research can explore the generalizability and scalability of the proposed framework to other domains and applications. The code for this application is uploaded at https://github.com/lynnkobe/Adaptive-Image-Augmentation.git.
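A toy stand-in for the selection idea: tabular Q-learning (rather than the paper's DQN) choosing among augmentation "actions" per coarse image state, with a mock reward in place of the DeepLab-v3+ verification signal; all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

actions = ["rotate", "flip", "contrast", "noise"]  # augmentation methods as actions
n_states = 3            # e.g. coarse bins of the geometric/pixel indicators
q = np.zeros((n_states, len(actions)))
alpha, eps = 0.1, 0.2   # learning rate and exploration rate

def mock_reward(state, action):
    """Stand-in for the segmentation-model reward; action 2 ("contrast")
    is made artificially best in every state for this demo."""
    return (1.0 if action == 2 else 0.2) + 0.05 * rng.standard_normal()

for _ in range(2000):
    s = int(rng.integers(n_states))
    a = int(rng.integers(len(actions))) if rng.random() < eps else int(np.argmax(q[s]))
    r = mock_reward(s, a)
    # One-step (bandit-style) Q update: each image is an independent episode.
    q[s, a] += alpha * (r - q[s, a])

best = [actions[int(np.argmax(q[s]))] for s in range(n_states)]
```

With enough exploration, the learned policy picks the highest-reward augmentation in every state, which is the per-image selection behavior the framework aims for.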

13.
Multimed Tools Appl ; : 1-16, 2023 Mar 04.
Article in English | MEDLINE | ID: mdl-37362733

ABSTRACT

The ability of Advanced Driving Assistance Systems (ADAS) to identify and understand all objects around the vehicle under varying driving conditions and environmental factors is critical. Today's vehicles are equipped with advanced driving assistance systems that make driving safer and more comfortable. A camera mounted on the car helps the system recognise and detect traffic signs and alerts the driver about various road conditions, such as construction work ahead or changed speed limits. The goal is to identify the traffic sign and process the image in minimal processing time. A custom convolutional neural network model is used to classify the traffic signs with higher accuracy than existing models. Image augmentation techniques are used to expand the dataset artificially, allowing the model to learn how an image looks from different perspectives, such as when viewed from different angles or when it appears blurry due to poor weather conditions. The algorithms used to detect traffic signs are YOLO v3 and YOLO v4-tiny. The proposed solution for detecting a specific set of traffic signs performed well, with an accuracy rate of 95.85%.

14.
PeerJ Comput Sci ; 9: e1318, 2023.
Article in English | MEDLINE | ID: mdl-37346635

ABSTRACT

Machine learning applications in the medical sector face a lack of medical data due to privacy issues. For instance, brain tumor image-based classification suffers from a shortage of brain images. This shortage produces classification problems, i.e., class imbalance issues, which can cause a bias toward one class over the others. This study aims to solve the imbalance problem of the "no tumor" class in the publicly available brain magnetic resonance imaging (MRI) dataset. Generative adversarial network (GAN)-based augmentation techniques, specifically deep convolutional GAN (DCGAN) and single-image GAN (SinGAN), were used to solve the imbalanced classification problem. In addition, traditional augmentation was implemented using the rotation method. Several VGG16 classification experiments were conducted, including (i) the original dataset, (ii) the DCGAN-based dataset, (iii) the SinGAN-based dataset, (iv) a combination of the DCGAN and SinGAN datasets, and (v) the rotation-based dataset. The results show that the original dataset achieved the highest accuracy, 73%, and that SinGAN outperformed DCGAN by a significant margin of 4%. In contrast, experimenting with the non-augmented original dataset resulted in the highest classification loss value, which illustrates the effect of the imbalance issue. These results provide a general view of the effect of different image augmentation techniques on enlarging the healthy brain dataset.

15.
Animals (Basel) ; 13(9)2023 May 02.
Article in English | MEDLINE | ID: mdl-37174563

ABSTRACT

Accurate identification of animal species is necessary to understand biodiversity richness, monitor endangered species, and study the impact of climate change on species distribution within a specific region. Camera traps represent a passive monitoring technique that generates millions of ecological images. The vast numbers of images make automated ecological analysis essential, given that manual assessment of large datasets is laborious, time-consuming, and expensive. Deep learning networks have advanced in the last few years to solve object and species identification tasks in the computer vision domain, providing state-of-the-art results. In our work, we trained and tested machine learning models to classify three animal groups (snakes, lizards, and toads) from camera trap images. We experimented with two pretrained models, VGG16 and ResNet50, and a self-trained convolutional neural network (CNN-1) with varying CNN layers and augmentation parameters. For multiclassification, CNN-1 achieved 72% accuracy, whereas VGG16 reached 87% and ResNet50 attained 86%. These results demonstrate that the transfer learning approach outperforms the self-trained model. The models showed promising results in identifying species, especially those with challenging body sizes and vegetation.

16.
Biocybern Biomed Eng ; 43(1): 352-368, 2023.
Article in English | MEDLINE | ID: mdl-36819118

ABSTRACT

Background and Objective: The global population has been heavily impacted by the COVID-19 coronavirus pandemic. Infections are spreading quickly around the world, and new variants (Delta, Delta Plus, and Omicron) continue to emerge. Real-time reverse transcription-polymerase chain reaction (RT-PCR) is the method most often used to find viral RNA in a nasopharyngeal swab. However, this diagnostic approach requires human involvement and takes more time per prediction. Moreover, the existing conventional test suffers mainly from false negatives, so there is a chance for the virus to spread quickly. Therefore, a rapid and early diagnosis of COVID-19 patients is needed to overcome these problems. Methods: Existing deep learning approaches for COVID detection suffer from unbalanced datasets, poor performance, and gradient vanishing problems. A customized skip connection-based network with a feature union approach has been developed in this work to overcome some of the issues mentioned above. Gradient information from chest X-ray (CXR) images is bypassed to subsequent layers through skip connections. The name "SCovNet" is short for this skip-connection-based feature union network for detecting COVID-19. The performance of the proposed model was tested with two publicly available CXR image databases, including balanced and unbalanced datasets. Results: A modified skip connection-based CNN model was suggested for a small unbalanced dataset (Kaggle) and achieved remarkable performance. In addition, the proposed model was also tested with a large GitHub database of CXR images and obtained an overall best accuracy of 98.67% with an impressively low false-negative rate of 0.0074. Conclusions: The experimental results show that the proposed method detects early signs of COVID-19 better than current methods. An additional point of interest is the hierarchical classification strategy provided in this work, which considers both balanced and unbalanced datasets to obtain the best COVID-19 identification rate.
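The skip-connection idea described above, in its generic residual form (not the SCovNet architecture itself), can be sketched as:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = F(x) + x: the skip connection adds the input back, so information
    (and, during training, gradient) bypasses the two weighted layers.
    A generic sketch, not the paper's exact block."""
    return relu(x @ w1) @ w2 + x

rng = np.random.default_rng(3)
x = rng.standard_normal((4, 16))          # a small batch of 16-d features
w1 = rng.standard_normal((16, 16)) * 0.1
w2 = rng.standard_normal((16, 16)) * 0.1
y = residual_block(x, w1, w2)
```

Note the degenerate case: if both weight matrices are zero, the block reduces to the identity, which is why skip connections mitigate vanishing gradients in deep stacks.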

17.
Sensors (Basel) ; 23(4)2023 Feb 07.
Article in English | MEDLINE | ID: mdl-36850460

ABSTRACT

Surface defect identification based on computer vision algorithms often leads to inadequate generalization ability due to large intraclass variation. Diversity in lighting conditions, noise components, defect size, shape, and position makes the problem challenging. To solve the problem, this paper develops a pixel-level image augmentation method based on image-to-image translation with generative adversarial networks (GANs) conditioned on fine-grained labels. The GAN model proposed in this work, referred to as Magna-Defect-GAN, is capable of taking control of the image generation process and producing image samples that are highly realistic in terms of variations. First, the surface defect dataset based on the magnetic particle inspection (MPI) method is acquired in a controlled environment. Then, the Magna-Defect-GAN model is trained, and new synthetic image samples with large intraclass variations are generated. These synthetic image samples artificially inflate the training dataset size in terms of intraclass diversity. Finally, the enlarged dataset is used to train a defect identification model. Experimental results demonstrate that the Magna-Defect-GAN model can generate realistic and high-resolution surface defect images up to a resolution of 512 × 512 in a controlled manner. We also show that this augmentation method can boost accuracy and be easily adapted to any other surface defect identification model.

18.
Diagnostics (Basel) ; 13(4)2023 Feb 15.
Article in English | MEDLINE | ID: mdl-36832221

ABSTRACT

Small bowel polyps exhibit variations in color, shape, morphology, texture, and size, along with artifacts, irregular polyp borders, and low illumination inside the gastrointestinal (GI) tract. Recently, researchers have developed many highly accurate polyp detection models based on one-stage or two-stage object detector algorithms for wireless capsule endoscopy (WCE) and colonoscopy images. However, their implementation requires high computational power and memory resources, thus sacrificing speed for an improvement in precision. Although the single-shot multibox detector (SSD) has proven its effectiveness in many medical imaging applications, its weak detection of small polyp regions persists due to the lack of complementary information between features of low- and high-level layers. The aim of this work is to consecutively reuse feature maps between layers of the original SSD network. In this paper, we propose an innovative SSD model based on a redesigned version of a dense convolutional network (DenseNet) emphasizing multiscale pyramidal feature map interdependence, called DC-SSDNet (densely connected single-shot multibox detector). The original VGG-16 backbone network of the SSD is replaced with a modified version of DenseNet. The DenseNet-46 front stem is improved to extract highly representative characteristics and contextual information, which improves the model's feature extraction ability. The DC-SSDNet architecture compresses unnecessary convolution layers of each dense block to reduce the CNN model's complexity. Experimental results showed a remarkable improvement in the proposed DC-SSDNet in detecting small polyp regions, achieving an mAP of 93.96% and an F1-score of 90.7% while requiring less computational time.

19.
Comput Med Imaging Graph ; 104: 102161, 2023 03.
Article in English | MEDLINE | ID: mdl-36603372

ABSTRACT

Various deep learning (DL) models are widely applied in medical image analysis, and their performance depends on the scale and diversity of available training data. However, medical images often suffer from difficulty of data acquisition, imbalance in sample categories, and high labeling cost. In addition, most image augmentation approaches focus on image synthesis only for classification tasks, and rarely consider synthetic image-label pairs for image segmentation tasks. In this paper, we focus on medical image augmentation for DL-based image segmentation and the synchronization between augmented image samples and their labels. We design a Synchronous Medical Image Augmentation (SMIA) framework, which includes two modules based on stochastic transformation and synthesis, and provides diverse and annotated training sets for DL models. In the transform-based SMIA module, for each medical image sample and its tissue segments, a subset of SMIA factors with a random number of factors and stochastic parameter values is selected to simultaneously generate augmented samples and the paired tissue segments. In the synthesis-based SMIA module, we randomly replace the original tissues with the augmented tissues using an equivalent replacement method to synthesize new medical images, which can well maintain the original medical implications. DL-based image segmentation experiments on bone marrow smear and dermoscopic images demonstrate that the proposed SMIA framework can generate category-balanced and diverse training data and has a positive impact on the performance of the models.
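A minimal sketch of the synchronization idea in the transform-based module: apply one randomly drawn geometric transform identically to an image and its label mask so the pair stays aligned (the transform set and values here are assumed for illustration, not the paper's SMIA factors):

```python
import numpy as np

rng = np.random.default_rng(5)

def sync_augment(image, mask, rng):
    """Apply one randomly chosen geometric transform identically to the
    image and its label mask, keeping the pair aligned."""
    k = int(rng.integers(4))       # shared rotation amount (0-3 quarter turns)
    flip = bool(rng.integers(2))   # shared horizontal flip
    out_img, out_mask = np.rot90(image, k), np.rot90(mask, k)
    if flip:
        out_img, out_mask = np.fliplr(out_img), np.fliplr(out_mask)
    return out_img, out_mask

image = rng.standard_normal((32, 32))
mask = (image > 0.5).astype(np.uint8)   # toy "tissue segment" label
aug_img, aug_mask = sync_augment(image, mask, rng)
```

Because the same permutation of pixels is applied to both arrays, re-thresholding the augmented image reproduces the augmented mask exactly, which is the annotation-consistency property the framework needs for segmentation training.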


Subjects
Deep Learning , Image Processing, Computer-Assisted
20.
BioData Min ; 16(1): 2, 2023 Jan 24.
Article in English | MEDLINE | ID: mdl-36694237

ABSTRACT

BACKGROUND: Anemia is one of the global public health problems that affect children and pregnant women. Anemia occurs when the level of red blood cells in the body decreases, when the structure of the red blood cells is destroyed, or when the Hb level in the red blood cells falls below the normal threshold as a result of increased red cell destruction, blood loss, defective cell production, or a depleted total of red blood cells. METHODS: The method used in this study is divided into three phases: gathering the datasets, which consist of palm images; pre-processing the images, which comprised image extraction and augmentation; segmenting the region of interest of the images and acquiring the components of the CIE L*a*b* colour space (also referred to as CIELAB); and finally developing the proposed models for the detection of anemia using various algorithms, including CNN, k-NN, Naïve Bayes, SVM, and Decision Tree. The experiment started with 527 initial images; rotation, flipping, and translation were used to augment the dataset to 2,635 images. We randomly divided the augmented dataset into 70% for training, 10% for validation, and 20% for testing. RESULTS: The results of the study show that the models performed well when the palm is used to detect anemia, with Naïve Bayes achieving 99.96% accuracy, CNN 99.92%, and SVM the lowest accuracy of 96.34%. CONCLUSIONS: The invasive method of detecting anemia is expensive and time-consuming; however, anemia can be detected through non-invasive methods such as machine learning algorithms, which are efficient, cost-effective, and take less time. In this work, we compared machine learning models such as CNN, k-NN, Decision Tree, Naïve Bayes, and SVM to detect anemia using images of the palm. Finally, the study supports other similar studies on the potency of machine learning algorithms as a non-invasive method of detecting iron deficiency anemia.
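The reported counts can be checked arithmetically: 2,635 augmented images is exactly five per source image, and the 70/10/20 split follows (the per-transform breakdown is not given in the abstract, so the factor-of-five reading is an assumption):

```python
# 527 palm images augmented to 2,635, i.e. five items per source image on
# average (exact per-transform breakdown assumed unknown), then split 70/10/20.
n_original = 527
n_augmented = 2635
factor = n_augmented // n_original        # 5 items per source image

n_train = int(0.7 * n_augmented)          # 1,844 training images
n_val = int(0.1 * n_augmented)            # 263 validation images
n_test = n_augmented - n_train - n_val    # 528 test images (remainder)
```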
