1.
J Neurooncol ; 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38958849

ABSTRACT

PURPOSE: Artificial Intelligence (AI) has become increasingly integrated clinically within neurosurgical oncology. This report reviews the cutting-edge technologies impacting tumor treatment and outcomes. METHODS: A rigorous literature search was performed with the aid of a research librarian to identify key articles referencing AI and related topics (machine learning (ML), computer vision (CV), augmented reality (AR), virtual reality (VR), etc.) for neurosurgical care of brain or spinal tumors. RESULTS: Treatment of central nervous system (CNS) tumors is being improved through advances across AI, such as ML, CV, and AR/VR. AI-aided diagnostic and prognostication tools can influence the pre-operative patient experience, while automated tumor segmentation and total resection predictions aid surgical planning. Novel intra-operative tools can rapidly provide histopathologic tumor classification to streamline treatment strategies. Post-operative video analysis, paired with rich surgical simulations, can enhance training feedback and regimens. CONCLUSION: While limited generalizability, bias, and patient data security are current concerns, the advent of federated learning, along with growing data consortiums, provides an avenue for increasingly safe, powerful, and effective AI platforms in the future.

2.
Front Artif Intell ; 7: 1386753, 2024.
Article in English | MEDLINE | ID: mdl-38952408

ABSTRACT

Introduction: Computerized sentiment detection, based on artificial intelligence and computer vision, has become essential in recent years. Thanks to developments in deep neural networks, this technology can now account for environmental, social, and cultural factors, as well as facial expressions. We aim to create more empathetic systems for various purposes, from medicine to interpreting emotional interactions on social media. Methods: To develop this technology, we combined authentic images from various databases, including EMOTIC (ADE20K, MSCOCO), EMODB_SMALL, and FRAMESDB, to train our models. We developed two sophisticated algorithms based on deep learning techniques, DCNN and VGG19. By optimizing the hyperparameters of our models, we analyze context and body language to improve our understanding of human emotions in images. We merge the 26 discrete emotional categories with the three continuous emotional dimensions to identify emotions in context. The proposed pipeline is completed by fusing our models. Results: We adjusted the parameters to outperform previous methods in capturing various emotions in different contexts. Our study showed that the Sentiment_recognition_model and VGG19_contexte increased mAP by 42.81% and 44.12%, respectively, surpassing the results of previous studies. Discussion: This groundbreaking research could significantly improve contextual emotion recognition in images. The implications of these promising results are far-reaching, extending to diverse fields such as social robotics, affective computing, human-machine interaction, and human-robot communication.
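
Merging 26 discrete categories with three continuous dimensions suggests a two-head network. Below is a minimal PyTorch sketch of that idea, assuming a VGG19 backbone with a multi-label classification head and a regression head; the class name, layer sizes, and training details are illustrative, not the authors' published architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class ContextEmotionNet(nn.Module):
    """Two-head emotion model (hypothetical): a multi-label head for the
    26 discrete EMOTIC categories and a regression head for the 3
    continuous dimensions (valence, arousal, dominance)."""
    def __init__(self, n_discrete=26, n_continuous=3):
        super().__init__()
        backbone = models.vgg19(weights=None)  # pretrained weights in practice
        self.features = backbone.features      # convolutional trunk
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        hidden = 512 * 7 * 7
        self.discrete_head = nn.Linear(hidden, n_discrete)      # logits
        self.continuous_head = nn.Linear(hidden, n_continuous)  # regression

    def forward(self, x):
        h = torch.flatten(self.pool(self.features(x)), 1)
        return self.discrete_head(h), self.continuous_head(h)

model = ContextEmotionNet()
logits, dims = model(torch.randn(2, 3, 224, 224))   # two RGB crops
print(logits.shape, dims.shape)  # torch.Size([2, 26]) torch.Size([2, 3])
```

In such a design, the discrete head would typically be trained with binary cross-entropy and the continuous head with an L1 or L2 loss; context and body crops could feed parallel branches fused before the heads.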

3.
Heliyon ; 10(11): e32297, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38947432

ABSTRACT

The authentication process involves all the supply chain stakeholders, and it is also adopted to verify food quality and safety. Food authentication tools are an essential part of traceability systems, as they provide information on the credibility of origin, species/variety identity, geographical provenance, and production entity. Moreover, these systems are useful to evaluate the effect of transformation processes, conservation strategies, and the reliability of packaging and distribution flows on food quality and safety. In this manuscript, we identified the innovative characteristics of food authentication systems that respond to market challenges, such as simplification, high sensitivity, and non-destructive operation during authentication procedures. We also discussed the potential of current identification systems based on molecular markers (chemical, biochemical, genetic) and the effectiveness of new technologies, with reference to the miniaturized systems offered by nanotechnologies and to computer vision systems linked to artificial intelligence processes. This overview emphasizes the importance of convergent technologies in food authentication, to support molecular markers with the technological innovation offered by emerging technologies derived from biotechnologies and informatics. The potential of these strategies was evaluated on real examples of high-value food products. Technological innovation can therefore strengthen the system of molecular markers to meet current market needs; however, food production processes are in profound evolution. Food 3D-printing and the introduction of new raw materials open new challenges for food authentication, which will require both an update of the current regulatory framework and the development and adoption of new analytical systems.

4.
Open Res Eur ; 4: 43, 2024.
Article in English | MEDLINE | ID: mdl-38957297

ABSTRACT

Background: This article introduces an innovative classification methodology to identify nanowires within scanning electron microscope images. Methods: Our approach employs advanced image manipulation techniques in conjunction with machine learning-based recognition algorithms. The effectiveness of the proposed method is demonstrated through its application to the categorization of scanning electron microscopy images depicting nanowire arrays. Results: The method's capability to isolate and distinguish individual nanowires within an array is the primary factor in the observed accuracy. The foundational dataset for model training comprises scanning electron microscopy images featuring 240 III-V nanowire arrays grown with metal organic chemical vapor deposition on silicon substrates. Each of these arrays consists of 66 nanowires. The results underscore the model's proficiency in discerning distinct wire configurations and detecting parasitic crystals. Our approach yields an average F1 score of 0.91, indicating high precision and recall. Conclusions: Such a high level of performance and accuracy demonstrates the viability of our technique not only for academic use but also for practical commercial implementation.
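
For reference, the reported F1 score is the harmonic mean of precision and recall; a short sketch of the arithmetic, with illustrative values rather than the study's per-class numbers:

```python
# F1 is the harmonic mean of precision and recall; an F1 of 0.91 therefore
# implies both quantities are high. The inputs below are illustrative only.
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.93, 0.89), 2))  # 0.91
```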

5.
Data Brief ; 54: 110279, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38962207

ABSTRACT

The LUMINA (Linguistic Unified Multimodal Indonesian Natural Audio-Visual) Dataset is a carefully curated constrained audio-visual dataset designed to support research in the field of speech perception. Spoken exclusively in Indonesian, LUMINA contains high-quality audio-visual recordings featuring 14 native speakers, including 9 males and 5 females. Each speaker contributes approximately 1,000 sentences, producing a rich and diverse data collection. The recorded videos focus on facial recordings, capturing essential visual cues and expressions that accompany speech. This extensive dataset provides a valuable resource for understanding how humans perceive and process spoken language, paving the way for speech recognition and synthesis technology advancements.

6.
Article in English | MEDLINE | ID: mdl-38978825

ABSTRACT

Background: The American Optometric Association defines computer vision syndrome (CVS), also known as digital eye strain, as "a group of eye- and vision-related problems that result from prolonged computer, tablet, e-reader and cell phone use". We aimed to create a well-structured, valid, and reliable questionnaire to determine the prevalence of CVS, and to analyze the visual, ocular surface, and extraocular sequelae of CVS using a novel and smart self-assessment questionnaire. Methods: This multicenter, observational, cross-sectional, descriptive, survey-based, online study included 6853 complete online responses of medical students from 15 universities. All participants responded to the updated, online, fourth version of the CVS questionnaire (CVS-F4), which has high validity and reliability. CVS was diagnosed according to five basic diagnostic criteria (5DC) derived from the CVS-F4. Respondents who fulfilled the 5DC were considered CVS cases. The 5DC were then converted into a novel five-question self-assessment questionnaire designated as the CVS-Smart. Results: Of 10,000 invited medical students, 8006 responded to the CVS-F4 survey (80% response rate), while 6853 of the 8006 respondents provided complete online responses (85.6% completion rate). The overall CVS prevalence was 58.78% (n = 4028) among the study respondents; CVS prevalence was higher among women (65.87%) than among men (48.06%). Within the CVS group, the most common visual, ocular surface, and extraocular complaints were eye strain, dry eye, and neck/shoulder/back pain in 74.50% (n = 3001), 58.27% (n = 2347), and 80.52% (n = 3244) of CVS cases, respectively. Notably, 75.92% (3058/4028) of CVS cases were involved in the Mandated Computer System Use Program. Multivariate logistic regression analysis revealed that the two most statistically significant diagnostic criteria of the 5DC were ≥2 symptoms/attacks per month over the last 12 months (odds ratio [OR] = 204177.2; P <0.0001) and symptoms/attacks associated with screen use (OR = 16047.34; P <0.0001). The CVS-Smart demonstrated a Cronbach's alpha reliability coefficient of 0.860 and a Guttman split-half coefficient of 0.805, with perfect content and construct validity. A CVS-Smart score of 7-10 points indicated the presence of CVS. Conclusions: The visual, ocular surface, and extraocular diagnostic criteria for CVS constituted the basic components of CVS-Smart. CVS-Smart is a novel, valid, reliable, subjective instrument for determining CVS diagnosis and prevalence and may provide a tool for rapid periodic assessment and prognostication. Individuals with positive CVS-Smart results should consider modifying their lifestyles and screen styles and seeking the help of ophthalmologists and/or optometrists. Higher institutional authorities should consider revising the Mandated Computer System Use Program to avoid the long-term consequences of CVS among university students. Further research must compare CVS-Smart with other available metrics for CVS, such as the CVS questionnaire, to determine its test-retest reliability and to justify its widespread use.
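
The reported reliability coefficient can be reproduced mechanically. A minimal sketch of Cronbach's alpha computed from an item-response matrix, using hypothetical responses rather than the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x questions matrix of scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical binary responses to five questions (like the 5DC), 6 respondents.
resp = np.array([[1, 1, 1, 0, 1], [1, 1, 1, 1, 1], [0, 0, 1, 0, 0],
                 [1, 1, 0, 1, 1], [0, 0, 0, 0, 0], [1, 1, 1, 1, 0]])
print(round(cronbach_alpha(resp), 3))
```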

7.
Article in English | MEDLINE | ID: mdl-38978826

ABSTRACT

Background: Vascular endothelial growth factor (VEGF) is the primary substance involved in retinal barrier breach. VEGF overexpression may cause diabetic macular edema (DME). Laser photocoagulation of the macula is the standard treatment for DME; however, recently, intravitreal anti-VEGF injections have surpassed laser treatment. Our aim was to evaluate the efficacy of intravitreal injections of aflibercept or ranibizumab for managing treatment-naive DME. Methods: This single-center, retrospective, interventional, comparative study included eyes with visual impairment due to treatment-naive DME that underwent intravitreal injection of either aflibercept 2 mg/0.05 mL or ranibizumab 0.5 mg/0.05 mL at Al-Azhar University Hospitals, Egypt, between March 2023 and January 2024. Demographic data and full ophthalmological examination results at baseline and 1, 3, and 6 months post-injection were collected, including the best-corrected distance visual acuity (BCDVA) in logarithm of the minimum angle of resolution (logMAR) notation, slit-lamp biomicroscopy, dilated fundoscopy, and central subfield thickness (CST) measured using spectral-domain optical coherence tomography. Results: Overall, 96 eyes of 96 patients with a median (interquartile range [IQR]) age of 57 (10) (range: 20-74) years and a male-to-female ratio of 1:2.7 were allocated to one of two groups with comparable age, sex, diabetes mellitus duration, and presence of other comorbidities (all P >0.05). There was no statistically significant difference in baseline diabetic retinopathy status or DME type between groups (both P >0.05). In both groups, the median (IQR) BCDVA significantly improved from 0.7 (0.8) logMAR at baseline to 0.4 (0.1) logMAR at 6 months post-injection (both P = 0.001), with no statistically significant difference between groups at all follow-up visits (all P >0.05). The median (IQR) CST significantly decreased in the aflibercept group from 347 (166) µm at baseline to 180 (233) µm at 6 months post-injection, and in the ranibizumab group from 360 (180) µm at baseline to 190 (224) µm at 6 months post-injection (both P = 0.001), with no statistically significant differences between groups at all follow-up visits (all P >0.05). No serious adverse effects were documented in either group. Conclusions: Ranibizumab and aflibercept were equally effective in achieving the desired anatomical and functional results in patients with treatment-naive DME over short-term follow-up, without significant differences in injection counts between the two drugs. Larger prospective, randomized, double-blinded trials with longer follow-up periods are needed to confirm our preliminary results.

8.
Article in English | MEDLINE | ID: mdl-38978827

ABSTRACT

Background: Diabetic retinopathy (DR), a sight-threatening ocular complication of diabetes mellitus, is one of the main causes of blindness in the working-age population. Dyslipidemia is a potential risk factor for the development or worsening of DR, with conflicting evidence in epidemiological studies. Fenofibrate, an antihyperlipidemic agent, has lipid-modifying and pleiotropic (non-lipid) effects that may lessen the incidence of microvascular events. Methods: Relevant studies were identified through a PubMed/MEDLINE search spanning the last 20 years, using the broad term "diabetic retinopathy" and specific terms "fenofibrate" and "dyslipidemia". References cited in these studies were further examined to compile this mini-review. These pivotal investigations underwent meticulous scrutiny and synthesis, focusing on methodological approaches and clinical outcomes. Furthermore, we provided the main findings of the seminal studies in a table to enhance comprehension and comparison. Results: Growing evidence indicates that fenofibrate treatment slows DR advancement owing to its possible protective effects on the blood-retinal barrier. The protective attributes of fenofibrate against DR progression and development can be broadly classified into two categories: lipid-modifying effects and non-lipid-related (pleiotropic) effects. The lipid-modifying effect is mediated through peroxisome proliferator-activated receptor-α activation, while the pleiotropic effects involve the reduction in serum levels of C-reactive protein, fibrinogen, and pro-inflammatory markers, and improvement in flow-mediated dilatation. In patients with DR, the lipid-modifying effects of fenofibrate primarily involve a reduction in lipoprotein-associated phospholipase A2 levels and the upregulation of apolipoprotein A1 levels. These changes contribute to the anti-inflammatory and anti-angiogenic effects of fenofibrate. Fenofibrate elicits a diverse array of pleiotropic effects, including anti-apoptotic, antioxidant, anti-inflammatory, and anti-angiogenic properties, along with the indirect consequences of these effects. Two randomized controlled trials, the Fenofibrate Intervention and Event Lowering in Diabetes study and the Action to Control Cardiovascular Risk in Diabetes study, noted that fenofibrate treatment protected against DR progression, independent of serum lipid levels. Conclusions: Fenofibrate, an oral antihyperlipidemic agent that is effective in decreasing DR progression, may reduce the number of patients who develop vision-threatening complications and require invasive treatment. Despite its proven protection against DR progression, fenofibrate treatment has not yet gained wide clinical acceptance in DR management. Ongoing and future clinical trials may clarify the role of fenofibrate treatment in DR management.

9.
Article in English | MEDLINE | ID: mdl-38981809

ABSTRACT

This article discusses the role of computer vision in otolaryngology, particularly through endoscopy and surgery. It covers recent applications of artificial intelligence (AI) in nonradiologic imaging within otolaryngology, noting benefits such as improved diagnostic accuracy and optimized therapeutic outcomes, while also pointing out the necessity for enhanced data curation and standardized research methodologies to advance clinical applications. Technical aspects are also covered, providing a detailed view of the progression from manual feature extraction to more complex AI models, including convolutional neural networks and vision transformers, and their potential application in clinical settings.

11.
Heliyon ; 10(12): e33039, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38988532

ABSTRACT

Objective: The aim of this study was to evaluate the impact of the COVID-19 pandemic on ocular health related to digital device usage among university students in Lebanon. Design: A cross-sectional design was utilized to examine the association between the pandemic and ocular health. Participants: A total of 255 university students in Lebanon participated in the study, selected based on their enrollment during the pandemic. Methods: An online survey assessed participants' digital device usage, awareness of digital eye strain, and experienced symptoms. The study addressed the relationship between symptom frequency and screen time, particularly in connection with the pandemic and online learning. Results: Prior to the pandemic, the majority of participants (73.0%) were unaware of digital eye strain. Following the transition to online learning, nearly half of the participants (47.0%) reported using digital devices for 12 or more hours. The majority (92.0%) experienced a substantial increase in daily digital device usage for learning, with an average increase of 3-5 h. Symptoms of digital eye strain, including headache, burning of the eyes, blurry vision, sensitivity to light, worsening of vision, and dryness of the eyes, intensified in both frequency and severity during the pandemic and online learning period. Conclusions: The study emphasizes the importance of promoting healthy habits and implementing preventive measures to reduce the prevalence of digital eye strain symptoms among university students. Healthcare professionals and public health authorities should educate individuals on strategies to alleviate digital eye strain, considering the persistent reliance on digital devices beyond the pandemic.

12.
J Neurol Sci ; 463: 123089, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38991323

ABSTRACT

BACKGROUND: The core clinical sign of Parkinson's disease (PD) is bradykinesia, for which a standard test is finger tapping: the clinician observes a person repetitively tap finger and thumb together. That requires an expert eye, a scarce resource, and even experts show variability and inaccuracy. Existing applications of technology to finger tapping reduce the tapping signal to one-dimensional measures, with researcher-defined features derived from those measures. OBJECTIVES: (1) To apply a deep learning neural network directly to video of finger tapping, without human-defined measures/features, and determine classification accuracy for idiopathic PD versus controls. (2) To visualise the features learned by the model. METHODS: 152 smartphone videos of 10 s finger tapping were collected from 40 people with PD and 37 controls. We down-sampled the pixel dimensions and split the videos into 1 s clips. A 3D convolutional neural network was trained on these clips. RESULTS: For discriminating PD from controls, our model showed a training accuracy of 0.91 and a test accuracy of 0.69, with a test precision of 0.73, test recall of 0.76, and test AUROC of 0.76. We also report class activation maps for the five most predictive features. These show the spatial and temporal sections of video upon which the network focuses attention to make a prediction, including an apparent dropping thumb movement distinctive of the PD group. CONCLUSIONS: A deep learning neural network can be applied directly to standard video of finger tapping to distinguish PD from controls, without a requirement to extract a one-dimensional signal from the video or pre-define tapping features.
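
A minimal sketch of the general approach, a small 3D convolutional network classifying short video clips, in PyTorch; the clip shape, channel widths, and layer count are assumptions, not the paper's published model:

```python
import torch
import torch.nn as nn

class TapNet3D(nn.Module):
    """Minimal 3D-CNN sketch for classifying 1 s finger-tapping clips
    (PD vs. control). Dimensions are illustrative assumptions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # halve time and space
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, 2),                     # PD vs. control logits
        )

    def forward(self, x):  # x: (batch, channels, frames, height, width)
        return self.net(x)

clip = torch.randn(4, 3, 30, 112, 112)  # four 1 s clips at ~30 fps, downsampled
print(TapNet3D()(clip).shape)           # torch.Size([4, 2])
```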

13.
Article in English | MEDLINE | ID: mdl-38992406

ABSTRACT

Artificial intelligence (AI) refers to computer-based methodologies that use data to teach a computer to solve pre-defined tasks; these methods can be applied to identify patterns in large multi-modal data sources. AI applications in inflammatory bowel disease (IBD) include predicting response to therapy, disease activity scoring of endoscopy, drug discovery, and identifying bowel damage in images. As a complex disease with entangled relationships among genomics, metabolomics, the microbiome, and the environment, IBD stands to benefit greatly from methodologies that can handle this complexity. We describe current applications and critical challenges, and propose future directions of AI in IBD.

14.
Front Transplant ; 3: 1305468, 2024.
Article in English | MEDLINE | ID: mdl-38993786

ABSTRACT

Two common obstacles limiting the performance of data-driven algorithms in digital histopathology classification tasks are the lack of expert annotations and the narrow diversity of datasets. Multi-instance learning (MIL) can address the former challenge for the analysis of whole slide images (WSI), but performance is often inferior to full supervision. We show that the inclusion of weak annotations can significantly enhance the effectiveness of MIL while keeping the approach scalable. An analysis framework was developed to process periodic acid-Schiff (PAS) and Sirius Red (SR) slides of renal biopsies. The workflow segments tissues into coarse tissue classes. Handcrafted and deep features were extracted from these tissues and combined using a soft attention model to predict several slide-level labels: delayed graft function (DGF), acute tubular injury (ATI), and Remuzzi grade components. A tissue segmentation quality metric was also developed to reduce the adverse impact of poorly segmented instances. The soft attention model was trained using 5-fold cross-validation on a mixed dataset and tested on the QUOD dataset containing n = 373 PAS and n = 195 SR biopsies. The average ROC-AUC over different prediction tasks was 0.598 ± 0.011, significantly higher than using only ResNet50 (0.545 ± 0.012), only handcrafted features (0.542 ± 0.011), and the state-of-the-art baseline (0.532 ± 0.012). In conjunction with soft attention, weighting tissues by segmentation quality led to further improvement (AUC = 0.618 ± 0.010). Using an intuitive visualisation scheme, we show that our approach may also be used to support clinical decision making, as it allows pinpointing individual tissues relevant to the predictions.
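
A minimal sketch of soft attention MIL pooling with a per-instance quality weight, in PyTorch; the feature dimensions, the log-quality weighting, and all names are assumptions rather than the paper's exact formulation:

```python
import torch
import torch.nn as nn

class SoftAttentionMIL(nn.Module):
    """Sketch of attention-based MIL pooling: each tissue instance gets a
    learned weight; the slide embedding is the weighted sum. A per-instance
    quality score (e.g. segmentation quality) can rescale the weights."""
    def __init__(self, feat_dim=256, attn_dim=64, n_labels=1):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, attn_dim), nn.Tanh(),
                                  nn.Linear(attn_dim, 1))
        self.classifier = nn.Linear(feat_dim, n_labels)

    def forward(self, instances, quality=None):
        # instances: (n_instances, feat_dim); quality: (n_instances,) in (0, 1]
        scores = self.attn(instances).squeeze(-1)
        if quality is not None:
            scores = scores + quality.log()      # down-weight poor segmentations
        weights = scores.softmax(dim=0)
        slide_embedding = (weights.unsqueeze(-1) * instances).sum(dim=0)
        return self.classifier(slide_embedding), weights

feats = torch.randn(12, 256)                     # 12 tissue instances per slide
logit, w = SoftAttentionMIL()(feats, quality=torch.rand(12).clamp(min=1e-3))
```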

15.
Sensors (Basel) ; 24(13)2024 Jun 21.
Article in English | MEDLINE | ID: mdl-39000823

ABSTRACT

Unmanned aerial vehicle (UAV)-based object detection methods are widely used in traffic detection due to their high flexibility and extensive coverage. In recent years, with the increasing complexity of the urban road environment, UAV object detection algorithms based on deep learning have gradually become a research hotspot. However, how to further improve algorithmic efficiency in response to the numerous and rapidly changing road elements, and thus achieve high-speed and accurate road object detection, remains a challenging issue. Given this context, this paper proposes the high-efficiency multi-object detection algorithm for UAVs (HeMoDU). HeMoDU reconstructs a state-of-the-art, deep-learning-based object detection model and optimizes several aspects to improve computational efficiency and detection accuracy. To validate the performance of HeMoDU in urban road environments, this paper uses the public urban road datasets VisDrone2019 and UA-DETRAC for evaluation. The experimental results show that the HeMoDU model effectively improves the speed and accuracy of UAV object detection.

16.
Sensors (Basel) ; 24(13)2024 Jun 25.
Article in English | MEDLINE | ID: mdl-39000900

ABSTRACT

In recent years, the technological landscape has undergone a profound metamorphosis catalyzed by the widespread integration of drones across diverse sectors. Essential to the drone manufacturing process is comprehensive testing, typically conducted in controlled laboratory settings to uphold safety and privacy standards. However, a formidable challenge emerges due to the inherent limitations of GPS signals within indoor environments, posing a threat to the accuracy of drone positioning. This limitation not only jeopardizes testing validity but also introduces instability and inaccuracies, compromising the assessment of drone performance. Given the pivotal role of precise GPS-derived data in drone autopilots, addressing this indoor-based GPS constraint is imperative to ensure the reliability and resilience of unmanned aerial vehicles (UAVs). This paper delves into the implementation of an Indoor Positioning System (IPS) leveraging computer vision. The proposed system endeavors to detect and localize UAVs within indoor environments through an enhanced vision-based triangulation approach. A comparative analysis with alternative positioning methodologies is undertaken to ascertain the efficacy of the proposed system. The results obtained showcase the efficiency and precision of the designed system in detecting and localizing various types of UAVs, underscoring its potential to advance the field of indoor drone navigation and testing.
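
A minimal sketch of two-view triangulation with OpenCV, the core operation behind vision-based indoor positioning; the projection matrices and image coordinates below are toy values, not the paper's calibration:

```python
import numpy as np
import cv2

# Toy projection matrices for two rectified cameras with unit focal length
# and a 0.5 m baseline; a real system would use P = K [R|t] from calibration.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

uv1 = np.array([[0.10], [0.00]])  # drone centre in camera 1 (normalized coords)
uv2 = np.array([[0.05], [0.00]])  # drone centre in camera 2

X_h = cv2.triangulatePoints(P1, P2, uv1, uv2)  # homogeneous 4x1 result
X = (X_h[:3] / X_h[3]).ravel()                 # Euclidean 3D position
print(X)  # approximately [1.0, 0.0, 10.0]: 10 m in front of camera 1
```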

17.
Sensors (Basel) ; 24(13)2024 Jun 26.
Article in English | MEDLINE | ID: mdl-39000914

ABSTRACT

The acquisition of the body temperature of animals kept in captivity in biology laboratories is crucial for several studies in the field of animal biology. Traditionally, the acquisition process was carried out manually, which guaranteed little accuracy or consistency in the acquired data and was painful for the animal. The process was then switched to a semi-manual process using a thermal camera, but it still involved manually clicking on each part of the animal's body every 20 s of video to obtain temperature values, making it a time-consuming, non-automatic, and difficult process. This project aims to automate the acquisition process through the automatic recognition of parts of a lizard's body, reading the temperature of these parts from video taken with two cameras simultaneously: an RGB camera and a thermal camera. The first camera detects the location of the lizard's various body parts using artificial intelligence techniques, and the second camera allows reading of the respective temperature of each part. Due to the lack of lizard datasets, either in the biology laboratory or online, a dataset had to be created from scratch, containing the identification of the lizard and six of its body parts. YOLOv5 was used to detect the lizard and its body parts in RGB images, achieving a precision of 90.00% and a recall of 98.80%. After initial calibration, the RGB and thermal camera images are properly localised, making it possible to know the lizard's position through a coordinate conversion from the RGB image to the thermal image, even when the lizard is at the same temperature as its surrounding environment. The thermal image has a colour temperature scale with the respective maximum and minimum temperature values, which is used to read each pixel of the thermal image, thus allowing the correct temperature to be read at each part of the lizard.
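
A minimal sketch of the two mappings the abstract describes, transferring a detection from the RGB frame into the thermal frame and converting a thermal pixel into a temperature; the homography values, box centre, and linear scale are illustrative assumptions:

```python
import numpy as np
import cv2

# Map a body-part detection from the RGB frame into the thermal frame with a
# homography H (estimated once during calibration, e.g. from matched points
# via cv2.findHomography), then read temperature from the thermal pixel.
H = np.array([[0.98, 0.01, 5.2],     # illustrative calibration result
              [0.00, 0.97, 3.8],
              [0.00, 0.00, 1.0]])

def rgb_to_thermal(pt_rgb: tuple[float, float]) -> tuple[int, int]:
    pt = cv2.perspectiveTransform(np.array([[pt_rgb]], dtype=np.float32), H)
    return int(pt[0, 0, 0]), int(pt[0, 0, 1])

def pixel_to_celsius(gray_value: int, t_min: float, t_max: float) -> float:
    # Linear mapping of an 8-bit thermal intensity onto the min/max
    # temperature scale displayed alongside the thermal image.
    return t_min + (gray_value / 255.0) * (t_max - t_min)

x_t, y_t = rgb_to_thermal((412.0, 233.0))        # hypothetical box centre
print(pixel_to_celsius(gray_value=180, t_min=22.0, t_max=38.0))
```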


Subjects
Artificial Intelligence; Body Temperature; Lizards; Animals; Lizards/physiology; Body Temperature/physiology; Video Recording/methods; Image Processing, Computer-Assisted/methods
18.
Sensors (Basel) ; 24(13)2024 Jul 04.
Article in English | MEDLINE | ID: mdl-39001127

ABSTRACT

Compressive sensing (CS) is recognized for its adeptness at compressing signals, making it a pivotal technology in the context of sensor data acquisition. With the proliferation of image data in Internet of Things (IoT) systems, CS is expected to reduce the transmission cost of signals captured by various sensor devices. However, the quality of CS-reconstructed signals inevitably degrades as the sampling rate decreases, which poses a challenge to inference accuracy in downstream computer vision (CV) tasks. This limitation is an obstacle to the real-world application of existing CS techniques, especially for reducing transmission costs in sensor-rich environments. In response to this challenge, this paper contributes to sensing technology a CV-oriented adaptive CS framework, based on saliency detection, that enables sensor systems to intelligently prioritize and transmit the most relevant data. Unlike existing CS techniques, the proposal prioritizes the accuracy of reconstructed images for CV purposes, not visual quality alone. The primary objective is to enhance the preservation of information critical for CV tasks while optimizing the utilization of sensor data. This work conducts experiments on various realistic scenario datasets collected by real sensor devices. Experimental results demonstrate superior performance compared to existing CS sampling techniques across the STL10, Intel, and Imagenette datasets for classification and KITTI for object detection. Compared with the baseline uniform sampling technique, the average classification accuracy shows a maximum improvement of 26.23%, 11.69%, and 18.25%, respectively, at specific sampling rates. In addition, even at very low sampling rates, the proposal proves robust in classification and detection compared with state-of-the-art CS techniques. This ensures that essential information for CV tasks is retained, improving the efficacy of sensor-based data acquisition systems.
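
A minimal sketch of saliency-driven adaptive block sampling, the general idea behind CV-oriented CS; the rate mapping, block size, and Gaussian measurement matrix are assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_cs_measure(image: np.ndarray, saliency: np.ndarray,
                        block: int = 8, base_rate: float = 0.1):
    """Sketch: block-wise CS with a saliency-weighted sampling rate.
    Salient blocks receive proportionally more random Gaussian
    measurements; the rate mapping below is an assumption."""
    measurements = []
    for i in range(0, image.shape[0], block):
        for j in range(0, image.shape[1], block):
            patch = image[i:i+block, j:j+block].ravel()
            s = saliency[i:i+block, j:j+block].mean()      # saliency in [0, 1]
            rate = np.clip(base_rate * (1 + 3 * s), 0, 1)  # up to 4x base rate
            m = max(1, int(rate * patch.size))
            Phi = rng.standard_normal((m, patch.size))     # measurement matrix
            measurements.append(Phi @ patch)               # y = Phi x
    return measurements

img = rng.random((64, 64))
sal = np.zeros((64, 64)); sal[16:48, 16:48] = 1.0          # toy saliency map
ys = adaptive_cs_measure(img, sal)
print(sum(len(y) for y in ys) / img.size)                  # effective rate
```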

19.
Sensors (Basel) ; 24(13)2024 Jul 05.
Article in English | MEDLINE | ID: mdl-39001152

ABSTRACT

The search for structural and microstructural defects using simple human vision is associated with significant errors in determining voids, large pores, and violations of the integrity and compactness of particle packing in the micro- and macrostructure of concrete. Computer vision methods, in particular convolutional neural networks, have proven to be reliable tools for the automatic detection of defects during visual inspection of building structures. The study's objective is to create and compare computer vision algorithms that use convolutional neural networks to identify and analyze damaged sections in concrete samples from different structures. Networks of the following architectures were selected: U-Net, LinkNet, and PSPNet. The analyzed images are photos of concrete samples obtained in laboratory tests to assess quality in terms of defects in the integrity and compactness of the structure. During implementation, changes in quality metrics such as macro-averaged precision, recall, and F1-score, as well as IoU (Jaccard coefficient) and accuracy, were monitored. The best metrics were demonstrated by the U-Net model, supplemented by the cellular automaton algorithm: precision = 0.91, recall = 0.90, F1 = 0.91, IoU = 0.84, and accuracy = 0.90. The developed segmentation algorithms are universal and show high quality in highlighting areas of interest under any shooting conditions and different volumes of defective zones, regardless of their localization. The automation of damage-area calculation, together with a recommendation in the "critical/uncritical" format, can be used to assess the condition of concrete in various types of structures, adjust the formulation, and change the technological parameters of production.
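
For reference, the IoU (Jaccard coefficient) reported above compares predicted and ground-truth defect masks; a short sketch with toy masks:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Jaccard coefficient between two binary defect masks:
    |intersection| / |union|."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0  # two empty masks count as a match

# Toy 4x4 masks: 3 overlapping pixels, 4 in the union -> IoU = 0.75
pred   = np.array([[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]], bool)
target = np.array([[1,1,0,0],[0,1,0,0],[0,0,0,0],[0,0,0,0]], bool)
print(iou(pred, target))
```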

20.
Photodiagnosis Photodyn Ther ; : 104277, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-39004111

ABSTRACT

BACKGROUND: This study aimed to investigate the choroidal vascularity index (CVI) in patients with computer vision syndrome (CVS) combined with accommodative lead. METHODS: This retrospective case-control study enrolled patients diagnosed with CVS and accommodative lead at the XXX Hospital affiliated to XXX University between July 2022 and May 2023. The control group included individuals without any ocular diseases. Ophthalmic assessments included basic visual acuity, refraction, ocular biometric parameters, and CVI. RESULTS: A total of 85 participants were included in the study, with 45 in the CVS group and 40 in the control group. The central corneal thickness of the CVS group was significantly thinner than that of the control group in both the right eye (532.40±30.93 vs. 545.78±19.99 µm, P=0.019) and the left eye (533.96±29.57 vs. 547.56±20.39 µm, P=0.014). In comparison with the control group, the CVS group exhibited lower CVI in the superior (0.40±0.08 vs. 0.43±0.09, P=0.001), temporal (0.40±0.08 vs. 0.44±0.10, P<0.001), inferior (0.41±0.08 vs. 0.46±0.08, P<0.001), and nasal (0.41±0.08 vs. 0.44±0.08, P=0.001) quadrants. Similar differences were observed in all four quadrants within the 1-3 mm radius, and in the temporal (P=0.004) and inferior (P=0.002) quadrants within the 1-6 mm and 3-6 mm radii (all P<0.05). CONCLUSION: Compared with individuals without ocular issues, patients with CVS and accommodative lead had thinner central corneal thickness and lower CVI.
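
As background on the metric, CVI is conventionally computed by binarising the choroidal region of an OCT B-scan and taking the luminal (dark) fraction; a minimal sketch under that definition, with a fixed threshold standing in for the Niblack auto-thresholding typically used:

```python
import numpy as np

def choroidal_vascularity_index(choroid_gray: np.ndarray,
                                threshold: int = 128) -> float:
    """Sketch of the usual CVI definition: within a segmented choroid ROI,
    dark pixels are taken as luminal (vascular) area, and
    CVI = luminal area / total choroidal area. The fixed threshold is a
    simplification of the auto-thresholding used in published protocols."""
    luminal = (choroid_gray < threshold).sum()
    return luminal / choroid_gray.size

rng = np.random.default_rng(1)
roi = rng.integers(0, 256, size=(120, 300))  # toy 8-bit choroid ROI
print(round(choroidal_vascularity_index(roi), 2))  # ~0.5 for uniform noise
```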
