ABSTRACT
PURPOSE: The structural similarity index measure (SSIM) has become a popular quality metric to evaluate QSM in a way that is closer to human perception than RMS error (RMSE). However, SSIM may overpenalize errors in diamagnetic tissues and underpenalize them in paramagnetic tissues, resulting in biasing. In addition, extreme artifacts may compress the dynamic range, resulting in unrealistically high SSIM scores (hacking). To overcome biasing and hacking, we propose XSIM: SSIM implemented in the native QSM range, and with internal parameters optimized for QSM. METHODS: We used forward simulations from a COSMOS ground-truth brain susceptibility map included in the 2016 QSM Reconstruction Challenge to investigate the effect of QSM reconstruction errors on the SSIM, XSIM, and RMSE metrics. We also used these metrics to optimize QSM reconstructions of the in vivo challenge data set. We repeated this experiment with the QSM abdominal phantom. To validate the use of XSIM instead of SSIM for QSM quality assessment across a range of different reconstruction techniques/algorithms, we analyzed the reconstructions submitted to the 2019 QSM Reconstruction Challenge 2.0. RESULTS: Our experiments confirmed the biasing and hacking effects on the SSIM metric applied to QSM. The XSIM metric was robust to those effects, penalizing the presence of streaking artifacts and reconstruction errors. Using XSIM to optimize QSM reconstruction regularization weights returned less overregularization than SSIM and RMSE. CONCLUSION: XSIM is recommended over traditional SSIM to evaluate QSM reconstructions against a known ground truth, as it avoids biasing and hacking effects and provides a larger dynamic range of scores.
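As a minimal illustration of the metric being varied here, a single-window SSIM can be computed directly in the native susceptibility range by passing a physical data range into the stabilization constants (a sketch only: the published XSIM also uses local windows and retuned internal parameters, and the `data_range` value below is an assumed susceptibility span, not the paper's parameter):

```python
import numpy as np

def global_ssim(x, y, data_range, k1=0.01, k2=0.03):
    # Global (single-window) SSIM; practical implementations use local windows.
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    lum = (2 * mx * my + c1) / (mx ** 2 + my ** 2 + c1)
    cs = (2 * cov + c2) / (vx + vy + c2)
    return lum * cs

rng = np.random.default_rng(0)
truth = rng.normal(0.0, 0.05, size=(32, 32))            # ppm-scale susceptibility
recon = truth + rng.normal(0.0, 0.01, size=truth.shape)  # noisy reconstruction

# "XSIM-like" usage: score in the native QSM range (assumed span of 0.5 ppm)
# instead of rescaling both maps to [0, 1] first.
score_native = global_ssim(truth, recon, data_range=0.5)
```

The key design point is that no rescaling step compresses or shifts the susceptibility values before scoring, which is what makes the metric robust to the dynamic-range "hacking" described above.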
Subject(s)
Algorithms , Brain , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Phantoms, Imaging , Humans , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods , Artifacts , Computer Simulation , Reproducibility of Results , Abdomen/diagnostic imaging
ABSTRACT
BACKGROUND: Battling malaria's morbidity and mortality demands innovative approaches to malaria diagnosis. Thick blood smears (TBS) are the gold standard for diagnosing malaria, but their coloration quality depends on supplies and adherence to standard protocols. Machine learning has been proposed to automate diagnosis, but the impact of smear coloration on parasite detection has not yet been fully explored. METHODS: To develop Coloration Analysis in Malaria (CAM), an image database containing 600 images was created. The database was randomly divided into training (70%), validation (15%), and test (15%) sets. Nineteen feature vectors were studied based on variances, correlation coefficients, and histograms (specific variables from histograms, full histograms, and principal components from the histograms). The Machine Learning Matlab Toolbox was used to select the best candidate feature vectors and machine learning classifiers. The candidate classifiers were then tuned on the validation set and tested to ultimately select the best one. RESULTS: This work introduces CAM, a machine learning system designed for automatic TBS image quality analysis. The results demonstrated that the cubic SVM classifier outperformed the others in classifying coloration quality in TBS, achieving a true negative rate of 95% and a true positive rate of 97%. CONCLUSIONS: An image-based approach was developed to automatically evaluate the coloration quality of TBS. This finding highlights the potential of image-based analysis to assess TBS coloration quality. CAM is intended to function as a supportive tool for analyzing the coloration quality of thick blood smears.
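As a sketch of one family of feature vectors the abstract mentions, a full-histogram descriptor for a smear photograph can be built as follows (illustrative only; CAM's actual nineteen feature vectors and classifier selection are not reproduced here):

```python
import numpy as np

def histogram_features(img, bins=8):
    # Per-channel normalized histograms concatenated into one feature vector.
    feats = []
    for c in range(img.shape[-1]):
        h, _ = np.histogram(img[..., c], bins=bins, range=(0.0, 1.0))
        feats.append(h / h.sum())
    return np.concatenate(feats)

rng = np.random.default_rng(0)
smear = rng.random((64, 64, 3))      # stand-in for a TBS photograph in [0, 1]
vec = histogram_features(smear)      # 3 channels x 8 bins = 24 features
```

A vector like this (or its principal components) would then be fed to the candidate classifiers for coloration-quality prediction.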
Subject(s)
Image Processing, Computer-Assisted , Machine Learning , Image Processing, Computer-Assisted/methods , Humans , Malaria , Color
ABSTRACT
Stroke, the second leading cause of mortality globally, predominantly results from ischemic conditions. Immediate attention and diagnosis, centered on the characterization of brain lesions, play a crucial role in patient prognosis. Standard stroke protocols include an initial evaluation with a non-contrast CT (NCCT) to discriminate between hemorrhage and ischemia. However, non-contrast CT lacks sensitivity for detecting subtle ischemic changes in the acute phase. Diffusion-weighted MRI studies provide enhanced capabilities, yet are constrained by limited availability and higher costs. Hence, we envision new approaches that integrate ADC stroke lesion findings into CT to enhance the analysis and accelerate stroke patient management. This study details a public challenge in which scientists applied top computational strategies to delineate stroke lesions on CT scans, utilizing paired ADC information. It also constitutes the first effort to build a paired dataset with NCCT and ADC studies of acute ischemic stroke patients. Submitted algorithms were validated against the references of two expert radiologists. The best Dice score achieved was 0.2 on a test set of 36 patient studies. Although all teams employed specialized deep learning tools, the results reveal the limitations of computational approaches in supporting the segmentation of small lesions with heterogeneous density.
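The evaluation metric used by the challenge, the Dice score, can be computed for binary lesion masks as in this generic sketch (not the challenge's official scoring code):

```python
import numpy as np

def dice(pred, ref):
    # Dice similarity coefficient between two binary masks.
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0

ref = np.zeros((8, 8), dtype=bool); ref[2:5, 2:5] = True    # 9-pixel lesion
pred = np.zeros((8, 8), dtype=bool); pred[3:6, 3:6] = True  # shifted prediction
score = dice(pred, ref)  # 4 overlapping pixels -> 2*4/(9+9) ≈ 0.444
```

Small lesions make this metric unforgiving: a few misplaced pixels on a tiny mask swing the score heavily, which helps explain the low best score of 0.2.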
Subject(s)
Ischemic Stroke , Tomography, X-Ray Computed , Humans , Ischemic Stroke/diagnostic imaging , Tomography, X-Ray Computed/methods , Magnetic Resonance Imaging/methods , Algorithms , Diffusion Magnetic Resonance Imaging/methods , Brain Ischemia/diagnostic imaging , Male , Female , Aged , Image Processing, Computer-Assisted/methods , Deep Learning , Stroke/diagnostic imaging , Brain/diagnostic imaging , Brain/pathology
ABSTRACT
The use of artificial intelligence (AI) algorithms has gained importance for dental applications in recent years. Analyzing AI information from different sensor data, such as images or panoramic radiographs (panoramic X-rays), can help to improve medical decisions and achieve early diagnosis of different dental pathologies. In particular, the use of deep learning (DL) techniques based on convolutional neural networks (CNNs) has obtained promising results in image-based dental applications, in which approaches based on classification, detection, and segmentation are being studied with growing interest. However, several challenges remain to be tackled, such as data quality and quantity, variability among categories, and analysis of the possible bias and variance associated with each dataset distribution. This study compares the performance of three deep learning object detection models (Faster R-CNN, YOLO V2, and SSD) using different ResNet architectures (ResNet-18, ResNet-50, and ResNet-101) as feature extractors for detecting and classifying third molar angles in panoramic X-rays according to Winter's classification criterion. Each object detection architecture was trained, calibrated, validated, and tested with the three feature extraction CNNs, which were the networks that best fit our dataset distribution. Winter's criterion characterizes the third molar's position relative to the longitudinal axis of the second molar; the detected categories are distoangular, vertical, mesioangular, and horizontal. For training, we used a total of 644 panoramic X-rays.
On the test dataset, mean average accuracy reached up to 99%, with YOLO V2 proving the most effective at the third molar angle detection problem. These results demonstrate that the use of CNNs for object detection in panoramic radiographs represents a promising solution in dental applications.
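Winter's criterion as described above can be sketched as a simple angle-to-category mapping (the thresholds below are assumed for illustration only; the paper's detectors predict the four categories directly from the image, without an explicit angle measurement):

```python
def winter_category(angle_deg):
    # Angle between the third molar's long axis and the second molar's long
    # axis, mapped to Winter's four categories. Thresholds are ASSUMED here
    # for illustration; conventions vary across the literature.
    if angle_deg < -10:
        return "distoangular"
    if angle_deg <= 10:
        return "vertical"
    if angle_deg < 80:
        return "mesioangular"
    return "horizontal"

category = winter_category(45)  # a 45-degree mesial tilt -> "mesioangular"
```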
Subject(s)
Deep Learning , Molar, Third , Neural Networks, Computer , Radiography, Panoramic , Radiography, Panoramic/methods , Humans , Molar, Third/diagnostic imaging , Algorithms , Artificial Intelligence , Image Processing, Computer-Assisted/methods
ABSTRACT
Smart indoor tourist attractions, such as smart museums and aquariums, require a significant investment in indoor localization devices. Global Positioning System receivers on smartphones are unsuitable for scenarios where dense materials such as concrete and metal weaken GPS signals, which is most often the case in indoor tourist attractions. With the help of deep learning, indoor localization can be done region by region using smartphone images. This approach requires no investment in infrastructure and reduces the cost and time needed to turn museums and aquariums into smart museums or smart aquariums. In this paper, we propose using deep learning algorithms to classify locations based on smartphone camera images for indoor tourist attractions. We evaluate our proposal in a real-world scenario in Brazil. We extensively collected images from ten different smartphones to classify biome-themed fish tanks in the Pantanal Biopark, creating a new dataset of 3654 images. We tested seven state-of-the-art neural networks, three of them based on transformers. On average, we achieved a precision of about 90% and a recall and F-score of about 89%. The results show that the proposal is suitable for most indoor tourist attractions.
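The reported precision, recall, and F-score can be macro-averaged over location classes as in this generic sketch (not the paper's evaluation code):

```python
import numpy as np

def macro_prf(y_true, y_pred, n_classes):
    # Macro-averaged precision, recall, and F-score over location classes.
    precisions, recalls = [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    p, r = float(np.mean(precisions)), float(np.mean(recalls))
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Toy labels for two fish-tank regions (classes 0 and 1)
y_true = np.array([0, 0, 1, 1])
y_pred = np.array([0, 1, 1, 1])
p, r, f = macro_prf(y_true, y_pred, 2)
```

Macro averaging weights every region equally, which matters when some tanks appear in far more photographs than others.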
Subject(s)
Deep Learning , Smartphone , Tourism , Humans , Algorithms , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Geographic Information Systems , Brazil
ABSTRACT
Introduction: crown lengthening for aesthetic purposes aims to reveal an adequate crown length and reduce gingival exposure. The procedure includes gingivectomy and alveolectomy to restore the prosthetically required supracrestal gingival tissue according to physiological dimensions. Through an intraoral scan of the maxilla, mandible, and maximum intercuspal position, and using specialized software, the shape of the teeth and the gingival contour are digitally designed. This design generates an image of the desired aesthetic restoration for the prosthodontist and periodontist. The physical fabrication of the design is computer-assisted (CAD-CAM), creating a vacuum-formed acrylic resin surgical guide to determine the clinical crown length required in surgery. Objective: the presented interdisciplinary case describes an innovative technique using a digital workflow through software that, from an oral scan, digitally designs a mock-up used as a guide for periodontal surgery. Case presentation: a 52-year-old ASA I patient attended the Periodontics Master's clinic at Universidad Autónoma de Coahuila (UAdeC) for periodontal surgery for aesthetic purposes. The surgery was performed by placing the mock-up in the anterosuperior region as a guide for gingivectomy. Then, a flap was raised before the alveolectomy, considering the length of the supracrestal gingival tissue. Finally, the soft tissues were sutured with a horizontal mattress suture technique. Results: seven days later, the sutures were removed, showing correct and uniform tissue healing. Conclusions: this digital approach offers a significant reduction in surgical time, in addition to satisfactory aesthetics and precise gingival architecture.
Subject(s)
Humans , Male , Middle Aged , Image Processing, Computer-Assisted/methods , Crown Lengthening , Computer-Aided Design , Imaging, Three-Dimensional/methods , Esthetics, Dental , Schools, Dental , Gingivectomy/methods , Mexico
ABSTRACT
BACKGROUND: Identifying mosquito vectors is crucial for controlling diseases. Automated identification studies using convolutional neural networks (CNNs) have been conducted for some urban mosquito vectors but not yet for the sylvatic mosquito vectors that transmit yellow fever. We evaluated the ability of the AlexNet CNN to identify four mosquito species (Aedes serratus, Aedes scapularis, Haemagogus leucocelaenus and Sabethes albiprivus) and whether there is variation in AlexNet's ability to classify mosquitoes based on pictures of four different body regions. METHODS: The specimens were photographed using a cell phone connected to a stereoscope. Photographs were taken of the full body, the pronotum and the lateral view of the thorax, which were pre-processed to train the AlexNet algorithm. The evaluation was based on the confusion matrix, the accuracy (ten pseudo-replicates) and the confidence interval for each experiment. RESULTS: Our study found that AlexNet can identify mosquito pictures of the genera Aedes, Sabethes and Haemagogus with over 90% accuracy. Furthermore, the algorithm's performance did not change according to the body region submitted. It is worth noting that the state of preservation of the mosquitoes, which were often damaged, may have affected the network's ability to differentiate between these species, and thus accuracy rates could have been even higher. CONCLUSIONS: Our results support the idea of applying CNNs for artificial intelligence (AI)-driven identification of mosquito vectors of tropical diseases. This approach can potentially be used in the surveillance of yellow fever vectors by health services and the population as well.
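The accuracy-plus-confidence-interval evaluation over ten pseudo-replicates can be sketched as follows (a normal-approximation interval over hypothetical accuracy values; the paper's exact procedure may differ):

```python
import numpy as np

def accuracy_ci(accs, z=1.96):
    # Mean accuracy and normal-approximation 95% CI over pseudo-replicates.
    accs = np.asarray(accs, dtype=float)
    m = float(accs.mean())
    half = z * float(accs.std(ddof=1)) / np.sqrt(len(accs))
    return m, (m - half, m + half)

# Hypothetical accuracies from ten pseudo-replicates of one experiment
accs = [0.91, 0.93, 0.92, 0.94, 0.90, 0.92, 0.93, 0.91, 0.92, 0.94]
mean, (lo, hi) = accuracy_ci(accs)
```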
Subject(s)
Aedes , Mosquito Vectors , Neural Networks, Computer , Yellow Fever , Animals , Mosquito Vectors/classification , Yellow Fever/transmission , Aedes/classification , Aedes/physiology , Algorithms , Image Processing, Computer-Assisted/methods , Culicidae/classification , Artificial Intelligence
ABSTRACT
AIM: CT images can identify structural and opacity alterations of the lungs, while nuclear medicine lung perfusion studies show the homogeneity (or lack thereof) of blood perfusion in the organ. Therefore, the use of SPECT/CT in lung perfusion scintigraphy can help physicians to assess anatomical and functional alterations of the lungs and to differentiate between acute and chronic disease. OBJECTIVE: To develop a computer-aided methodology to quantify the total global perfusion of the lungs via SPECT/CT images and to compare these results with parenchymal alterations obtained from CT images. METHODS: 39 perfusion SPECT/CT images collected retrospectively from the Nuclear Medicine Facility of Botucatu Medical School's Clinics Hospital in São Paulo, Brazil, were analyzed. Anatomical lung impairments (emphysema, collapsed and infiltrated tissue) and the functional percentage of the lungs (blood perfusion) were quantified from CT and SPECT images with the aid of the free, open-source software 3D Slicer. The results obtained with 3D Slicer (3D-TGP) were also compared to each patient's total global perfusion reported in their medical record, obtained from visual inspection of planar images (2D-TGP). RESULTS: This research developed a novel and practical methodology for obtaining the lungs' total global perfusion from SPECT/CT images in a semiautomatic manner. 3D-TGP versus 2D-TGP showed a bias of 7%, with variation of up to 67% between the two methods. Perfusion percentage showed a weak positive correlation with infiltration (p = 0.0070 and ρ = 0.43) and collapsed parenchyma (p = 0.040 and ρ = 0.33). CONCLUSIONS: This research brings meaningful contributions to the scientific community because it used free, open-source software to quantify the lungs' blood perfusion via SPECT/CT images and indicated that the relationship between parenchymal alterations and the organ's perfusion capability might not be so direct, given compensatory mechanisms.
Subject(s)
Lung , Perfusion Imaging , Single Photon Emission Computed Tomography Computed Tomography , Humans , Single Photon Emission Computed Tomography Computed Tomography/methods , Lung/diagnostic imaging , Lung/blood supply , Retrospective Studies , Male , Female , Perfusion Imaging/methods , Middle Aged , Aged , Image Processing, Computer-Assisted/methods , Adult , Aged, 80 and over
ABSTRACT
PURPOSE: Amid rising health awareness, natural products, which have milder effects than medical drugs, are becoming popular. However, only a few systems can quantitatively assess their impact on living organisms. Therefore, we developed a deep-learning system to automate the counting of cells in a gerbil model, aiming to assess a natural product's effectiveness against ischemia. METHODS: Images acquired from paraffin blocks containing gerbil brains were analyzed by a deep-learning model (fine-tuned Detectron2). RESULTS: The counting system achieved a positive predictive value of 79% and a sensitivity of 85% when visual judgment by an expert was used as ground truth. CONCLUSIONS: Our system evaluated hydrogen water's potential against ischemia and found it potentially useful, which is consistent with expert assessment. Due to natural products' milder effects, large data sets are needed for evaluation, making manual measurement labor-intensive. Hence, our system offers a promising new approach for evaluating natural products.
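The two reported figures follow directly from detection counts; a small sketch with hypothetical counts chosen to reproduce them:

```python
def ppv_sensitivity(tp, fp, fn):
    # Positive predictive value and sensitivity from detection counts.
    ppv = tp / (tp + fp)
    sens = tp / (tp + fn)
    return ppv, sens

# Hypothetical counts for illustration: 790 correct detections,
# 210 false detections, 139 missed cells (not the paper's raw numbers).
ppv, sens = ppv_sensitivity(790, 210, 139)  # ≈ 0.79 and ≈ 0.85
```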
Subject(s)
Brain Ischemia , Disease Models, Animal , Gerbillinae , Animals , Brain Ischemia/pathology , Deep Learning , Brain/pathology , Brain/blood supply , Image Processing, Computer-Assisted/methods
ABSTRACT
Plasmodium parasites cause malaria, which remains a significant threat to global health, affecting 200 million people and causing 400,000 deaths yearly. Plasmodium falciparum and Plasmodium vivax remain the two main malaria species affecting humans. Identifying malaria in blood smears requires years of expertise, even for highly trained specialists. Studies in the literature have addressed the automatic identification and classification of malaria. However, several points must be addressed and investigated before these automatic methods can be used clinically in a computer-aided diagnosis (CAD) scenario. In this work, we assess the transfer learning approach using well-known pre-trained deep learning architectures. We considered a database with 6222 regions of interest (ROIs), of which 6002 are from the Broad Bioimage Benchmark Collection (BBBC) and 220 were acquired locally by us at Fundação Oswaldo Cruz (FIOCRUZ) in Porto Velho, Rondônia, Brazil, which is part of the legal Amazon. We exhaustively cross-validated the dataset using 100 distinct partitions, each with 80% for training and 20% for testing, considering circular ROIs (rough segmentation). Our experimental results show that DenseNet201 has the potential to identify Plasmodium parasites in ROIs (infected or uninfected) of microscopic images, achieving 99.41% AUC with a fast processing time. We further validated our results, showing that DenseNet201 was significantly better (99% confidence interval) than the other networks considered in the experiment. Our results support the claim that transfer learning with texture features can differentiate subjects with malaria, spotting those with Plasmodium even in leukocyte images, which is a challenge. In future work, we intend to scale our approach by adding more data and developing a user-friendly interface for CAD use. We aim to aid the worldwide population as well as the local native communities living near the rivers of the legal Amazon.
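The evaluation side of this protocol, AUC averaged over many random 80/20 partitions, can be sketched with a rank-based AUC on synthetic scores (for simplicity the sketch reuses fixed classifier scores instead of retraining DenseNet201 on each partition):

```python
import numpy as np

def auc(scores, labels):
    # ROC AUC via the rank-based Mann-Whitney U statistic (no ties assumed).
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(1)
labels = np.array([0] * 50 + [1] * 50)           # uninfected / infected ROIs
scores = labels + rng.normal(0.0, 0.5, size=100)  # synthetic classifier scores

aucs = []
for _ in range(100):                  # 100 random partitions, 20% held out
    test_idx = rng.permutation(100)[:20]
    yt = labels[test_idx]
    if 0 < yt.sum() < len(yt):        # need both classes in the split
        aucs.append(auc(scores[test_idx], yt))
```

Averaging over many partitions stabilizes the estimate and supports the significance comparison between networks.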
Subject(s)
Microscopy , Humans , Microscopy/methods , Plasmodium falciparum/pathogenicity , Plasmodium vivax , Computational Biology/methods , Malaria/parasitology , Plasmodium , Deep Learning , Databases, Factual , Image Processing, Computer-Assisted/methods , Malaria, Falciparum/parasitology , Diagnosis, Computer-Assisted/methods
ABSTRACT
Analyzing tissue microstructure is essential for understanding complex biological systems in different species. Tissue function largely depends on intrinsic tissue architecture. Studying the three-dimensional (3D) microstructure of tissues such as the liver is therefore particularly interesting, given the liver's conserved essential roles in metabolic processes and detoxification. Here, we present TiMiGNet, a novel deep learning approach for virtual 3D tissue microstructure reconstruction using Generative Adversarial Networks and fluorescence microscopy. TiMiGNet overcomes challenges such as poor antibody penetration and time-intensive procedures by generating accurate, high-resolution predictions of tissue components across large volumes without the need for paired images as input. We applied TiMiGNet to analyze tissue microstructure in mouse and human liver tissue. TiMiGNet shows high performance in predicting structures such as bile canaliculi, sinusoids, and Kupffer cell shapes from actin meshwork images. Remarkably, using TiMiGNet we were able to computationally reconstruct tissue structures that cannot be directly imaged due to experimental limitations in deep, dense tissues, a significant advancement in deep tissue imaging. Our open-source virtual prediction tool facilitates accessible and efficient multi-species tissue microstructure analysis, accommodating researchers with varying expertise levels. Overall, our method represents a powerful approach for studying tissue microstructure, with far-reaching applications in diverse biological contexts and species.
Subject(s)
Deep Learning , Liver , Humans , Animals , Mice , Imaging, Three-Dimensional/methods , Microscopy, Fluorescence/methods , Image Processing, Computer-Assisted/methods
ABSTRACT
Leaf Area Index (LAI) is the ratio of total leaf area to ground surface area. LAI is a key structural characteristic of forest ecosystems, so an accurate estimation process is needed. One method for estimating LAI is digital cover photography. However, most applications that compute LAI from digital photos do not account for the brown color of plant parts. Previous research that included the brown color in the calculation potentially produced biased results, due to the increased pixel count from the original photo. This study aims to enhance the accuracy of LAI estimation with methods that consider the brown color while minimizing errors. Image processing is carried out in two stages to separate leaf and non-leaf pixels, using the RGB color model in the first stage and the CIELAB color model in the second stage. The proposed methods and existing applications are evaluated against the actual LAI value obtained using Terrestrial Laser Scanning (TLS) as the ground truth. The results demonstrate that the proposed methods effectively identify non-leaf parts and exhibit the lowest error rates compared to other methods. In conclusion, this study provides alternative techniques to enhance the accuracy of LAI estimation in forest ecosystems.
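The two-stage color separation can be sketched with a numpy-only sRGB-to-CIELAB conversion (the thresholds below are assumed for illustration, not the paper's values):

```python
import numpy as np

def rgb_to_lab(rgb):
    # sRGB in [0, 1] -> CIELAB (D65 white point), standard formulas.
    rgb = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = rgb @ m.T / np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16 / 116)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def leaf_mask(img):
    # Stage 1 (RGB): green dominance flags candidate leaf pixels.
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    stage1 = (g > r) & (g > b)
    # Stage 2 (CIELAB): negative a* (green axis) rejects brown plant parts.
    a_star = rgb_to_lab(img)[..., 1]
    return stage1 & (a_star < -5)

img = np.zeros((2, 2, 3))
img[0, 0] = [0.2, 0.6, 0.2]    # green leaf
img[0, 1] = [0.5, 0.35, 0.2]   # brown branch
img[1, 0] = [0.9, 0.9, 0.9]    # sky
lai_pixels = leaf_mask(img)
```

The a* channel gives a perceptually grounded green/brown axis that plain RGB thresholds lack, which is the motivation for the second stage.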
Subject(s)
Forests , Image Processing, Computer-Assisted , Photography , Plant Leaves , Plant Leaves/anatomy & histology , Photography/methods , Image Processing, Computer-Assisted/methods , Trees , Color
ABSTRACT
Infrared thermography is gaining relevance in breast cancer assessment. For this purpose, breast segmentation in thermograms is an important task for performing automatic image analysis and detecting possible temperature changes that indicate the presence of malignancy. However, it is not a simple task, since the breast borders, especially the top borders, often have low contrast, making it difficult to isolate the breast area. Several algorithms have been proposed for breast segmentation, but these depend heavily on the contrast at the lower breast borders and on filtering algorithms to remove false edges. This work takes advantage of the distinctive inframammary shape to simplify the definition of the lower breast border regardless of the contrast level. This shape also provides a strong anatomical reference to support the definition of the poorly marked upper boundary of the breasts, which has been one of the major challenges in the literature. To demonstrate the viability of the proposed technique for automatic breast segmentation, we applied it to a database of 180 thermograms and compared our results with those reported in the literature. Our approach achieved high performance, with an Intersection over Union of 0.934, higher even than that reported for artificial intelligence algorithms. The performance is invariant to breast size and to the thermal contrast of the images.
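The reported Intersection over Union can be computed for binary masks as in this generic sketch:

```python
import numpy as np

def iou(pred, ref):
    # Intersection over Union between two binary segmentation masks.
    pred, ref = pred.astype(bool), ref.astype(bool)
    union = np.logical_or(pred, ref).sum()
    return np.logical_and(pred, ref).sum() / union if union else 1.0

ref = np.zeros((10, 10), dtype=bool); ref[1:9, 2:9] = True   # 56-px reference
pred = np.zeros((10, 10), dtype=bool); pred[1:9, 1:9] = True  # 64-px estimate
score = iou(pred, ref)  # 56 / 64 = 0.875
```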
Subject(s)
Algorithms , Breast , Thermography , Humans , Thermography/methods , Female , Breast/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Infrared Rays , Image Processing, Computer-Assisted/methods
ABSTRACT
Given the significant impact of biofilms on human health and material corrosion, research in this field urgently needs more accessible techniques to facilitate the testing of new control agents and general understanding of biofilm biology. Microtiter plates offer a convenient format for standardized evaluations, including high-throughput assays of alternative treatments and molecular modulators. This study introduces a novel Biofilm Analysis Software (BAS) for quantifying biofilms from microtiter plate images. We focused on early biofilm growth stages and compared BAS quantification to common techniques: direct turbidity measurement, intrinsic fluorescence detection linked to pyoverdine production, and standard crystal violet staining which enables image analysis and optical density measurement. We also assessed their sensitivity for detecting subtle growth effects caused by cyclic AMP and gentamicin. Our results show that BAS image analysis is at least as sensitive as the standard method of spectrophotometrically quantifying the crystal violet retained by biofilms. Furthermore, we demonstrated that bacteria adhered after short incubations (from 10 min to 4 h), isolated from planktonic populations by a simple rinse, can be monitored until their growth is detectable by intrinsic fluorescence, BAS analysis, or resolubilized crystal violet. These procedures are widely accessible for many laboratories, including those with limited resources, as they do not require a spectrophotometer or other specialized equipment.
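One simple way to quantify biofilm from a plate image, thresholding stain intensity and reporting the stained fraction of the well, can be sketched as follows (illustrative only; the BAS algorithm itself is not reproduced here, and the threshold is an assumed parameter):

```python
import numpy as np

def well_stain_fraction(well_img, threshold=0.4):
    # Fraction of well pixels whose stain intensity exceeds a threshold;
    # a crude per-well biofilm readout from a microtiter plate image.
    return float((well_img > threshold).mean())

well = np.zeros((20, 20))
well[5:15, 5:15] = 0.8   # stained biofilm patch inside the well
fraction = well_stain_fraction(well)  # 100 / 400 = 0.25
```

A readout like this, computed per well, is what allows image analysis to substitute for spectrophotometric quantification when no plate reader is available.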
Subject(s)
Biofilms , Image Processing, Computer-Assisted , Software , Biofilms/growth & development , Image Processing, Computer-Assisted/methods , Gentian Violet , Bacteria/growth & development , Bacterial Adhesion , Gentamicins/pharmacology
ABSTRACT
Morphometry is fundamental for studying and correlating neuronal morphology with brain functions. With increasing computational power, it is possible to extract morphometric characteristics automatically, including features such as length, volume, and number of neuron branches. However, to the best of our knowledge, there is no mapping of morphometric tools yet. In this context, we conducted a systematic search and review to identify and analyze tools within the scope of neuron analysis. The work followed a well-defined protocol and sought to answer the following research questions: What open-source tools are available for neuronal morphometric analysis? What morphometric characteristics are extracted by these tools? For greater robustness and coverage, the study was based on analysis of the papers as well as on the documentation and hands-on tests of the tools available in repositories. We analyzed 1,586 papers and mapped 23 tools, among which NeuroM, L-Measure, and NeuroMorphoVis extract the most features. Furthermore, we contribute to the body of knowledge with the unprecedented presentation of 150 unique morphometric features whose terminologies were categorized and standardized. Overall, the study contributes to advancing the understanding of the complex mechanisms underlying the brain.
Subject(s)
Neurons , Humans , Neurons/cytology , Animals , Brain/cytology , Computational Biology/methods , Computational Biology/trends , Software/trends , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/trends
ABSTRACT
This article presents an unsupervised method for segmenting brain computed tomography scans. The proposed methodology involves image feature extraction and application of similarity and continuity constraints to generate segmentation maps of the anatomical head structures. Specifically designed for real-world datasets, this approach applies a spatial continuity scoring function tailored to the desired number of structures. The primary objective is to assist medical experts in diagnosis by identifying regions with specific abnormalities. Results indicate a simplified and accessible solution, reducing computational effort, training time, and financial costs. Moreover, the method presents potential for expediting the interpretation of abnormal scans, thereby impacting clinical practice. This proposed approach might serve as a practical tool for segmenting brain computed tomography scans, and make a significant contribution to the analysis of medical images in both research and clinical settings.
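The idea of combining image features with a spatial continuity constraint can be sketched by clustering pixels on intensity plus weighted coordinates (a simplified stand-in for the paper's continuity scoring function; the spatial weight below is an assumed parameter):

```python
import numpy as np

def kmeans(X, k, iters=20):
    # Minimal k-means; initial centers are picked at intensity quantiles
    # so the run is deterministic.
    init = np.argsort(X[:, 0])[np.linspace(0, len(X) - 1, k).astype(int)]
    centers = X[init]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

# Synthetic "scan": one bright anatomical structure on a dark background
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
ys, xs = np.mgrid[0:16, 0:16]
w = 0.02  # spatial weight: a crude continuity constraint on the clusters
feats = np.stack([img.ravel(), w * ys.ravel(), w * xs.ravel()], axis=1)
seg = kmeans(feats, k=2).reshape(16, 16)
```

Appending weighted coordinates to the feature vector discourages spatially scattered clusters, which is the effect a continuity constraint is meant to achieve.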
Subject(s)
Brain , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods , Algorithms , Unsupervised Machine Learning
ABSTRACT
BACKGROUND: Dengue, Zika, and chikungunya, whose viruses are transmitted mainly by Aedes aegypti, significantly impact human health worldwide. Despite the recent development of promising vaccines against the dengue virus, controlling these arbovirus diseases still depends on mosquito surveillance and control. Nonetheless, several studies have shown that these measures are insufficiently effective or even ineffective. Identifying higher-risk areas within a municipality and directing control efforts towards them could improve their effectiveness. One tool for this is the premise condition index (PCI); however, measuring it requires visiting every building. We propose a novel approach capable of predicting the PCI based on street-level facade images, which we call PCINet. METHODOLOGY: Our study was conducted in Campinas, a city of one million inhabitants in São Paulo, Brazil. We surveyed 200 blocks, visited their buildings, and measured the three traditional PCI components (building and backyard conditions and shading), the facade conditions (taking pictures of them), and other characteristics. We trained a deep neural network with the pictures taken, creating a computational model that can predict a building's condition based on the view of its facade. We evaluated PCINet in a scenario emulating a real large-scale situation, where the model could be deployed to automatically monitor four regions of Campinas to identify risk areas. PRINCIPAL FINDINGS: PCINet produced reasonable results in differentiating the facade condition into three levels, and it is a scalable strategy to triage large areas. The entire process can be automated through data collection from facade data sources and inferences through PCINet. The facade conditions correlated highly with the building and backyard conditions and reasonably well with shading and backyard conditions. The use of street-level images and PCINet could help to optimize Ae. aegypti surveillance and control, reducing the number of in-person visits necessary to identify buildings, blocks, and neighborhoods at higher risk from mosquito-borne arbovirus diseases.
Subject(s)
Aedes , Dengue , Mosquito Vectors , Aedes/virology , Aedes/physiology , Animals , Brazil/epidemiology , Humans , Mosquito Vectors/virology , Mosquito Vectors/physiology , Dengue/prevention & control , Dengue/epidemiology , Dengue/transmission , Cities , Mosquito Control/methods , Image Processing, Computer-Assisted/methods , Zika Virus Infection/prevention & control , Zika Virus Infection/epidemiology , Zika Virus Infection/transmission
ABSTRACT
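The PCI combines per-premise component scores into a single figure that can be used for triage. As a minimal sketch of that idea, the toy code below sums the three traditional components (each assumed here to be scored 1 = good to 3 = poor, giving a 3-9 range) and flags blocks whose mean score exceeds a threshold; the scoring scale and threshold are illustrative assumptions, not values from this study.

```python
# Hypothetical sketch of PCI-style triage; component scale (1-3) and the
# risk threshold are assumptions for illustration, not the study's values.

def premise_condition_index(building: int, backyard: int, shading: int) -> int:
    """Sum of the three component scores (range 3-9, higher = worse)."""
    for score in (building, backyard, shading):
        if not 1 <= score <= 3:
            raise ValueError("each component score must be 1, 2, or 3")
    return building + backyard + shading

def triage_block(premise_scores: list, threshold: float = 6.0) -> str:
    """Flag a block as higher-risk when its mean premise score exceeds the threshold."""
    mean_pci = sum(premise_scores) / len(premise_scores)
    return "higher-risk" if mean_pci > threshold else "lower-risk"
```

In a PCINet-style pipeline, the per-premise scores would come from the facade classifier rather than in-person visits, and only blocks flagged "higher-risk" would receive a visit.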
Three-dimensional structured illumination microscopy (3D-SIM) and fluorescence in situ hybridization on three-dimensionally preserved cells (3D-FISH) have proven to be robust and efficient methodologies for analyzing nuclear architecture and profiling the genome's topological features. These methods allow the simultaneous visualization and evaluation of several target structures at super-resolution. In this chapter, we focus on the application of 3D-SIM to the visualization of 3D-FISH preparations of interphase chromosomes, known as Chromosome Territories (CTs). We provide a workflow and detailed guidelines for sample preparation, image acquisition, and image analysis to obtain quantitative measurements for profiling chromosome topological features. In parallel, we present a practical example of these protocols: the profiling of CTs 9 and 22, which are involved in the translocation t(9;22) in Chronic Myeloid Leukemia (CML). The profiling of chromosome topological features described in this chapter allowed us to characterize a large-scale topological disruption of CTs 9 and 22 that correlates directly with patients' response to treatment and may indicate a change in the inheritance systems. These findings offer new insights into how genome structure is associated with the response to cancer treatments, highlighting the importance of microscopy in analyzing the topological features of the genome.
Subject(s)
Imaging, Three-Dimensional , In Situ Hybridization, Fluorescence , Humans , In Situ Hybridization, Fluorescence/methods , Imaging, Three-Dimensional/methods , Translocation, Genetic , Chromosomes/genetics , Leukemia, Myelogenous, Chronic, BCR-ABL Positive/genetics , Leukemia, Myelogenous, Chronic, BCR-ABL Positive/pathology , Interphase/genetics , Chromosomes, Human/genetics , Image Processing, Computer-Assisted/methods
ABSTRACT
Several studies have aimed at identifying biomarkers in the initial phases of Alzheimer's disease (AD). Meanwhile, texture features, such as those derived from gray-level co-occurrence matrices (GLCMs), have revealed important information in several types of medical images. More recently, texture-based brain networks have been shown to provide useful information for characterizing healthy individuals. However, no studies have yet explored this type of network in the context of AD. This work employed texture brain networks to investigate the distinction between groups of patients with amnestic mild cognitive impairment (aMCI) or mild dementia due to AD and a group of healthy subjects. Magnetic resonance (MR) images from the three groups, acquired at two time points, were used. Images were segmented, and GLCM texture parameters were calculated for each region. Structural brain networks were generated using regions as nodes and the similarity among texture parameters as links, and graph theory was used to compute five network measures. An ANCOVA was performed for each network measure to assess statistical differences between groups. The thalamus showed significant differences between aMCI and AD patients for four network measures in the right hemisphere and for one network measure in the left hemisphere. There were also significant differences between controls and AD patients for the left hippocampus, right superior parietal lobule, and right thalamus (one network measure each). These findings represent changes in the texture of these regions that can be associated with the cortical volume and thickness atrophies reported in the literature for AD. The texture networks showed potential to differentiate between aMCI and AD patients, as well as between controls and AD patients, offering a new tool to help understand these conditions and eventually aid early intervention and personalized treatment, thereby improving patient outcomes and advancing AD research.
Subject(s)
Alzheimer Disease , Brain , Cognitive Dysfunction , Magnetic Resonance Imaging , Humans , Alzheimer Disease/diagnostic imaging , Alzheimer Disease/pathology , Cognitive Dysfunction/diagnostic imaging , Cognitive Dysfunction/physiopathology , Cognitive Dysfunction/pathology , Magnetic Resonance Imaging/methods , Male , Female , Aged , Brain/diagnostic imaging , Brain/pathology , Middle Aged , Nerve Net/diagnostic imaging , Nerve Net/pathology , Nerve Net/physiopathology , Aged, 80 and over , Image Processing, Computer-Assisted/methods
ABSTRACT
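The texture-network construction described above (GLCM features per region, regions as nodes, feature similarity as link weights) can be sketched in miniature. The toy code below is an illustrative reimplementation under assumptions, not the authors' pipeline: it computes a normalized GLCM for one offset, derives two common Haralick-style features (contrast and homogeneity), and links two regions by the inverse of their feature-vector distance.

```python
# Illustrative sketch of GLCM texture features and similarity-based network
# links; offset, feature choice, and similarity function are assumptions.
import math

def glcm(patch, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for the offset (dx, dy)."""
    m = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(len(patch) - dy):
        for x in range(len(patch[0]) - dx):
            i, j = patch[y][x], patch[y + dy][x + dx]
            m[i][j] += 1
            total += 1
    return [[count / total for count in row] for row in m]

def texture_features(g):
    """Contrast and homogeneity of a normalized GLCM."""
    contrast = sum(p * (i - j) ** 2
                   for i, row in enumerate(g) for j, p in enumerate(row))
    homogeneity = sum(p / (1 + abs(i - j))
                      for i, row in enumerate(g) for j, p in enumerate(row))
    return (contrast, homogeneity)

def link_weight(features_a, features_b):
    """Network link between two regions: inverse of Euclidean feature distance."""
    return 1.0 / (1.0 + math.dist(features_a, features_b))
```

A full network would repeat this over all segmented regions and then feed the weighted graph to standard graph-theoretic measures.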
Optical microscopy videos enable experts to analyze the motion of several biological elements. In blood samples infected with Trypanosoma cruzi (T. cruzi) in particular, microscopy videos reveal a dynamic scene in which the parasites' motion is conspicuous. While parasites are self-propelled, cells are inert and may undergo displacement only under dynamic events, such as fluid flow and microscope focus adjustments. This paper analyzes the trajectories of T. cruzi and blood cells to discriminate between these elements by identifying the following motion patterns: collateral, fluctuating, and pan-tilt-zoom (PTZ). We consider two approaches: i) classification experiments to discriminate between parasites and cells; and ii) clustering experiments to identify cell motion. We propose the trajectory step dispersion (TSD) descriptor, based on standard deviation, to characterize these elements, outperforming state-of-the-art descriptors. Our results confirm that motion is valuable for discriminating T. cruzi from blood cells. Since the parasites perform collateral motion, their trajectory steps tend towards randomness. Cells may exhibit fluctuating motion, following a homogeneous, directional path, or PTZ motion, with trajectory steps confined to a restricted area. These findings may contribute to the development of new computational tools focused on trajectory analysis, which can advance the study and medical diagnosis of Chagas disease.
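A dispersion descriptor in the spirit of TSD can be sketched as follows. The abstract only states that TSD is based on the standard deviation of trajectory steps, so the exact formulation below (population standard deviation of Euclidean step lengths) is an assumption for illustration, not the authors' definition.

```python
# Hedged sketch of a TSD-style descriptor: dispersion of step lengths along
# a 2-D trajectory. The precise TSD formulation is an assumption here.
import math

def step_lengths(trajectory):
    """Euclidean length of each step between consecutive (x, y) positions."""
    return [math.dist(a, b) for a, b in zip(trajectory, trajectory[1:])]

def tsd(trajectory):
    """Population standard deviation of the step lengths (higher = more erratic)."""
    steps = step_lengths(trajectory)
    mean = sum(steps) / len(steps)
    return math.sqrt(sum((s - mean) ** 2 for s in steps) / len(steps))
```

Under this sketch, a smoothly drifting cell (near-constant step length) scores near zero, while a parasite's erratic collateral motion (highly variable step lengths) scores higher, matching the discrimination idea described above.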