Results 1 - 20 of 1,535
1.
Plant Mol Biol ; 114(5): 92, 2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39179745

ABSTRACT

Leaf rolling is a common adaptive response that plants have evolved to counteract the detrimental effects of various environmental stresses. Insight into the mechanisms underlying leaf rolling offers researchers a unique opportunity to enhance stress tolerance in crops that exhibit leaf rolling, such as maize. To understand leaf rolling more deeply, it is imperative to ascertain the occurrence and extent of this phenotype. While traditional manual leaf rolling detection is slow and laborious, research into high-throughput methods for detecting leaf rolling within our investigation scope remains limited. In this study, we present an approach for detecting leaf rolling in maize based on the YOLOv8 model. Our method, LRD-YOLO, integrates two significant improvements: a Convolutional Block Attention Module to augment feature extraction capabilities, and Deformable ConvNets v2 to enhance adaptability to changes in target shape and scale. In experiments on a dataset encompassing severe occlusion, variations in leaf scale and shape, and complex background scenarios, our approach achieves an impressive mean average precision of 81.6%, surpassing current state-of-the-art methods. Furthermore, the LRD-YOLO model demands only 8.0 G floating point operations and 3.48 M parameters. We have proposed an innovative method for leaf rolling detection in maize, and experimental outcomes showcase the efficacy of LRD-YOLO in precisely detecting leaf rolling in complex scenarios while maintaining real-time inference speed.
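Detection accuracy in studies like this is typically scored via the Intersection over Union (IoU) between predicted and ground-truth boxes, which underlies the mean average precision figure quoted above. A minimal sketch of that computation (illustrative only, not the LRD-YOLO code):

```python
# Boxes are (x1, y1, x2, y2) corner coordinates; names are illustrative.

def iou(box_a, box_b):
    """Return Intersection-over-Union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (clamped to zero if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.1429
```

A detection is usually counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.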


Subjects
Deep Learning, Plant Leaves, Zea mays, Zea mays/physiology, Zea mays/genetics, Plant Leaves/physiology, Environment
2.
Lab Invest ; : 102130, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39233013

ABSTRACT

In digital pathology, accurate mitosis detection in histopathological images is critical for cancer diagnosis and prognosis. However, this remains challenging due to the inherent variability in cell morphology and the domain shift problem. This study introduces CNMI-YOLO (ConvNext Mitosis Identification-YOLO), a new two-stage deep learning method that uses the YOLOv7 architecture for cell detection and the ConvNeXt architecture for cell classification. The goal is to improve the identification of mitosis in different types of cancer. We utilized the MIDOG 2022 dataset in the experiments to ensure the model's robustness and success across various scanners, species, and cancer types. The CNMI-YOLO model demonstrates superior performance in accurately detecting mitotic cells, significantly outperforming existing models in terms of precision, recall, and F1-score. The CNMI-YOLO model achieved an F1-score of 0.795 on the MIDOG 2022 and demonstrated robust generalization with F1-scores of 0.783 and 0.759 on the external melanoma and sarcoma test sets, respectively. Additionally, the study included ablation studies to evaluate various object detection and classification models, such as Faster R-CNN and Swin Transformer. Furthermore, we assessed the model's robustness performance on unseen data, confirming its ability to generalize and its potential for real-world use in digital pathology, using soft tissue sarcoma and melanoma samples not included in the training dataset.
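The F1-scores reported above are the harmonic mean of precision and recall; a small helper makes the relationship explicit (a sketch, not the CNMI-YOLO code):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall; 0.0 when both are zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# e.g. precision 0.8 and recall 0.79 (made-up values for illustration):
print(round(f1_score(0.8, 0.79), 3))  # 0.795
```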

3.
Mol Biol Evol ; 40(4)2023 04 04.
Article in English | MEDLINE | ID: mdl-36947126

ABSTRACT

Gene flow between previously differentiated populations during the founding of an admixed or hybrid population has the potential to introduce adaptive alleles into the new population. If the adaptive allele is common in one source population, but not the other, then as the adaptive allele rises in frequency in the admixed population, genetic ancestry from the source containing the adaptive allele will increase nearby as well. Patterns of genetic ancestry have therefore been used to identify post-admixture positive selection in humans and other animals, including examples in immunity, metabolism, and animal coloration. A common method identifies regions of the genome that have local ancestry "outliers" compared with the distribution across the rest of the genome, considering each locus independently. However, we lack theoretical models for expected distributions of ancestry under various demographic scenarios, resulting in potential false positives and false negatives. Further, ancestry patterns between distant sites are often not independent. As a result, current methods tend to infer wide genomic regions containing many genes as under selection, limiting biological interpretation. Instead, we develop a deep learning object detection method applied to images generated from local ancestry-painted genomes. This approach preserves information from the surrounding genomic context and avoids potential pitfalls of user-defined summary statistics. We find the method is robust to a variety of demographic misspecifications using simulated data. Applied to human genotype data from Cabo Verde, we localize a known adaptive locus to a single narrow region compared with multiple or long windows obtained using two other ancestry-based methods.


Subjects
Population Genetics, Genomics, Animals, Humans, Genomics/methods, Genotype, Gene Flow, Chromosomes
4.
Metab Eng ; 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39233197

ABSTRACT

There have been significant advances in literature mining, allowing for the extraction of target information from the literature. However, biological literature often includes biological pathway images that are difficult to extract in an easily editable format. To address this challenge, this study develops a machine learning framework called "Extraction of Biological Pathway Information" (EBPI). The framework automates the search for relevant publications, extracts biological pathway information from images within the literature, including genes, enzymes, and metabolites, and generates output in a tabular format. To do this, the framework determines the direction of biochemical reactions and detects and classifies text within biological pathway images. The performance of EBPI was evaluated by comparing the extracted pathway information with manually curated pathway maps. EBPI will be useful for extracting biological pathway information from the literature in a high-throughput manner and can be used for pathway studies, including metabolic engineering.

5.
J Exp Bot ; 2024 May 08.
Article in English | MEDLINE | ID: mdl-38716775

ABSTRACT

Plant physiology and metabolism rely on the function of stomata, structures on the surface of above-ground organs that facilitate the exchange of gases with the atmosphere. The morphology of the guard cells and corresponding pore that make up a stoma, as well as the density (number per unit area), are critical in determining overall gas exchange capacity. These characteristics can be quantified visually from images captured using microscopes, traditionally relying on time-consuming manual analysis. However, deep learning (DL) models provide a promising route to increase the throughput and accuracy of plant phenotyping tasks, including stomatal analysis. Here we review the published literature on the application of DL for stomatal analysis. We discuss the variation in pipelines used, from data acquisition, pre-processing, DL architecture, and output evaluation to post-processing. We introduce the most common network structures, the plant species that have been studied, and the measurements that have been performed. Through this review, we hope to promote the use of DL methods for plant phenotyping tasks and highlight future requirements to optimise uptake, predominantly focusing on the sharing of datasets and generalisation of models, as well as the caveats associated with utilising image data to infer physiological function.
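Stomatal density, one of the measurements discussed, reduces to a count per unit imaged area; a toy sketch of that final step of a counting pipeline (the numbers below are made up for illustration):

```python
def stomatal_density(num_stomata, image_area_mm2):
    """Stomata per mm^2 of imaged leaf surface."""
    if image_area_mm2 <= 0:
        raise ValueError("image area must be positive")
    return num_stomata / image_area_mm2

# e.g. 45 stomata detected in a 0.15 mm^2 field of view:
print(stomatal_density(45, 0.15))  # 300.0 stomata per mm^2
```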

6.
Liver Int ; 44(2): 330-343, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38014574

ABSTRACT

Metabolic dysfunction-associated fatty liver disease (MAFLD) has reached epidemic proportions worldwide and is the most frequent cause of chronic liver disease in developed countries. Within the spectrum of liver disease in MAFLD, steatohepatitis is a progressive form of liver disease and hepatocyte ballooning (HB) is a cardinal pathological feature of steatohepatitis. The accurate and reproducible diagnosis of HB is therefore critical for the early detection and treatment of steatohepatitis. Currently, a diagnosis of HB relies on pathological examination by expert pathologists, which may be a time-consuming and subjective process. Hence, there has been interest in developing automated methods for diagnosing HB. This narrative review briefly discusses the development of artificial intelligence (AI) technology for diagnosing fatty liver disease pathology over the last 30 years and provides an overview of the current research status of AI algorithms for the identification of HB, including published articles on traditional machine learning algorithms and deep learning algorithms. This narrative review also summarizes object detection algorithms, including their principles, historical development, and applications in medical image analysis. The potential benefits of object detection algorithms for HB diagnosis (specifically those combined with a transformer architecture) are discussed, along with future directions for object detection algorithms in HB diagnosis and the potential applications of generative AI built on transformer architectures in this field. In conclusion, object detection algorithms have huge potential for the identification of HB and could make the diagnosis of MAFLD more accurate and efficient in the near future.


Subjects
Artificial Intelligence, Non-alcoholic Fatty Liver Disease, Humans, Algorithms, Technology, Hepatocytes
7.
J Microsc ; 295(2): 93-101, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38532662

ABSTRACT

As microscopy diversifies and becomes ever more complex, the quantification of microscopy images has emerged as a major roadblock for many researchers. All researchers must face certain challenges in turning microscopy images into answers, independent of their scientific question and the images they have generated. Challenges may arise at many stages throughout the analysis process, including handling of the image files, image pre-processing, object finding, measurement, and statistical analysis. While the exact solution required for each obstacle will be problem-specific, by keeping analysis in mind, optimizing data quality, understanding tools and tradeoffs, breaking workflows and data sets into chunks, talking to experts, and thoroughly documenting what has been done, analysts at any experience level can learn to overcome these challenges and create better and easier image analyses.

8.
J Microsc ; 2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38571482

ABSTRACT

Computational image analysis has helped label-free imaging maintain its relevance for cell biology, despite rapid technical improvements in fluorescence microscopy and the molecular specificity of its tags. Here, we discuss some computational tools developed in our lab and their application to quantify cell shape, intracellular organelle movement, and bead transport in vitro, using differential interference contrast (DIC) microscopy data as inputs. The focus of these methods is image filtering to enhance image gradients, combined with segmentation and single particle tracking (SPT). We demonstrate the application of these methods to Escherichia coli cell length estimation and to tracking of densely packed lipid granules in Caenorhabditis elegans one-celled embryos, of diffusing beads in solutions of different viscosities, and of kinesin-driven transport on microtubules. These approaches demonstrate how improvements to low-level image analysis methods can help obtain insights through quantitative cellular and subcellular microscopy.
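Single particle tracking of the kind mentioned typically links detections between consecutive frames by proximity; a minimal greedy nearest-neighbour linking step (a sketch under simplifying assumptions, not the lab's actual tool):

```python
import math

def link_frames(points_a, points_b, max_disp):
    """Greedily link each (x, y) point in frame A to its nearest unused
    point in frame B, rejecting links beyond max_disp pixels."""
    links, used = [], set()
    for i, (ax, ay) in enumerate(points_a):
        best, best_d = None, max_disp
        for j, (bx, by) in enumerate(points_b):
            if j in used:
                continue
            d = math.hypot(ax - bx, ay - by)
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            links.append((i, best))  # (index in frame A, index in frame B)
    return links

print(link_frames([(0, 0), (5, 5)], [(0.4, 0.1), (5.2, 4.9)], max_disp=1.0))
```

Production SPT tools solve this assignment globally (e.g., via linear assignment) and handle appearing or disappearing particles; the greedy version above only illustrates the core idea.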

9.
Int J Legal Med ; 138(2): 659-670, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37804333

ABSTRACT

The diagnosis of drowning is one of the most difficult tasks in forensic medicine. The diatom test is a complementary analysis method that may help the forensic pathologist diagnose drowning and localize the drowning site. The test consists of detecting or identifying diatoms, unicellular algae, in tissue and water samples. To observe diatoms under light microscopy, these samples may be digested by enzymes such as proteinase K. However, this digestion method may leave high amounts of debris, making the detection and identification of diatoms difficult. To the best of our knowledge, no model has been proven to accurately detect and identify diatom species observed against highly complex backgrounds under light microscopy. Therefore, a novel method of model development for diatom detection and identification in a forensic context, based on sequential transfer learning of object detection models, is proposed in this article. The best resulting models are able to detect and identify up to 50 species of forensically relevant diatoms, with an average precision and an average recall ranging from 0.7 to 1 depending on the species concerned. The models were developed by sequential transfer learning and globally outperformed those developed by traditional transfer learning. The best diatom species identification model is expected to be used routinely at the Medicolegal Institute of Paris.


Subjects
Diatoms, Drowning, Humans, Drowning/diagnosis, Lung, Forensic Medicine/methods, Microscopy
10.
Surg Endosc ; 38(6): 3461-3469, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38760565

ABSTRACT

BACKGROUND: Most intraoperative adverse events (iAEs) result from surgeons' errors, and bleeding accounts for the majority of iAEs. Recognizing active bleeding in a timely manner is important to ensure safe surgery, and artificial intelligence (AI) has great potential for detecting active bleeding and providing real-time surgical support. This study aimed to develop a real-time AI model to detect active intraoperative bleeding. METHODS: We extracted 27 surgical videos from a nationwide multi-institutional surgical video database in Japan and divided them at the patient level into three sets: training (n = 21), validation (n = 3), and testing (n = 3). We subsequently extracted the bleeding scenes and labeled active bleeding and blood pooling distinctly, frame by frame. We used pre-trained YOLOv7_6w and developed a model to learn both active bleeding and blood pooling. The average precision at an Intersection over Union threshold of 0.5 (AP.50) for active bleeding and the frames per second (FPS) were quantified. In addition, we conducted two 5-point Likert-scale (5 = Excellent, 4 = Good, 3 = Fair, 2 = Poor, and 1 = Fail) questionnaires about sensitivity (the sensitivity score) and the number of overdetection areas (the overdetection score) to investigate the surgeons' assessment. RESULTS: We annotated 34,117 images of 254 bleeding events. The AP.50 for active bleeding in the developed model was 0.574 and the FPS was 48.5. Twenty surgeons answered the two questionnaires, indicating a sensitivity score of 4.92 and an overdetection score of 4.62 for the model. CONCLUSIONS: We developed an AI model to detect active bleeding, achieving real-time processing speed. Our AI model can be used to provide real-time surgical support.
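AP.50 is the area under the precision-recall curve built from score-ranked detections that count as true positives at IoU >= 0.5; a compact sketch of that integration (illustrative only, not the study's evaluation code):

```python
def average_precision(tp_flags, num_gt):
    """Area under the precision-recall curve.

    tp_flags: True/False outcomes of detections, sorted by descending
    confidence; num_gt: number of ground-truth objects."""
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for is_tp in tp_flags:
        tp += is_tp
        fp += not is_tp
        recall = tp / num_gt
        precision = tp / (tp + fp)
        ap += precision * (recall - prev_recall)  # rectangle rule
        prev_recall = recall
    return ap

# Three ranked detections against two ground-truth objects:
print(average_precision([True, False, True], num_gt=2))  # 5/6 ≈ 0.833
```

Benchmark implementations additionally interpolate the precision envelope; the rectangle rule above keeps the idea visible in a few lines.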


Subjects
Artificial Intelligence, Colectomy, Laparoscopy, Humans, Laparoscopy/adverse effects, Laparoscopy/methods, Colectomy/methods, Colectomy/adverse effects, Surgical Blood Loss/statistics & numerical data, Video Recording, Japan, Intraoperative Complications/diagnosis, Intraoperative Complications/etiology
11.
Surg Endosc ; 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39138679

ABSTRACT

BACKGROUND: Postoperative hypoparathyroidism is a major complication of thyroidectomy, occurring when the parathyroid glands are inadvertently damaged during surgery. Although intraoperative images are rarely used to train artificial intelligence (AI) because of their complex nature, AI may be trained to intraoperatively detect parathyroid glands using various augmentation methods. The purpose of this study was to train an effective AI model to detect parathyroid glands during thyroidectomy. METHODS: Video clips of the parathyroid gland were collected during thyroid lobectomy procedures. Confirmed parathyroid images were used to train three types of datasets according to augmentation status: baseline, geometric transformation, and generative adversarial network-based image inpainting. The primary outcome was the average precision of the AI in detecting parathyroid glands. RESULTS: In total, 152 fine-needle aspiration-confirmed parathyroid gland images were acquired from 150 patients who underwent unilateral lobectomy. The average precision of the AI model in detecting parathyroid glands based on baseline data was 77%. This performance was enhanced by applying both geometric transformation and image inpainting augmentation methods, with the geometric transformation dataset showing a higher average precision (79%) than the image inpainting model (78.6%). When this model was subjected to external validation using a completely different thyroidectomy approach, the image inpainting method was more effective (46%) than both the geometric transformation (37%) and baseline (33%) methods. CONCLUSION: This AI model was found to be an effective and generalizable tool in the intraoperative identification of parathyroid glands during thyroidectomy, especially when aided by appropriate augmentation methods. Additional studies comparing model performance and surgeon identification, however, are needed to assess the true clinical relevance of this AI model.
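Geometric transformation augmentation of the kind compared above can be as simple as mirroring images together with their box labels; a toy horizontal flip over a row-major pixel grid (a sketch, not the study's pipeline):

```python
def hflip(image, boxes, width):
    """Horizontally flip a row-major pixel grid and its (x1, y1, x2, y2) boxes."""
    flipped_img = [list(reversed(row)) for row in image]
    # A box's x-extent [x1, x2] maps to [width - x2, width - x1] after the flip.
    flipped_boxes = [(width - x2, y1, width - x1, y2) for (x1, y1, x2, y2) in boxes]
    return flipped_img, flipped_boxes

img = [[1, 2, 3],
       [4, 5, 6]]
print(hflip(img, [(0, 0, 1, 2)], width=3))
```

Real pipelines apply such transforms (flips, rotations, scaling) randomly at training time so that each epoch sees geometrically varied copies of the same labeled frames.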

12.
Phytopathology ; 114(7): 1490-1501, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38968142

ABSTRACT

Early detection of rice blast disease is pivotal to ensure rice yield. We collected in situ images of rice blast and constructed a rice blast dataset based on variations in lesion shape, size, and color. Given that rice blast lesions are small and typically exhibit round, oval, and fusiform shapes, we proposed a small object detection model named GCPDFFNet (global context-based parallel differentiation feature fusion network) for rice blast recognition. The GCPDFFNet model has three global context feature extraction modules and two parallel differentiation feature fusion modules. The global context modules are employed to focus on the lesion areas; the parallel differentiation feature fusion modules are used to enhance the recognition of small-sized lesions. In addition, we proposed the SCYLLA normalized Wasserstein distance loss function, specifically designed to accelerate model convergence and improve the detection accuracy of rice blast disease. Comparative experiments were conducted on the rice blast dataset to evaluate the performance of the model. The proposed GCPDFFNet model outperformed the baseline network CenterNet, with a significant increase in mean average precision from 83.6% to 95.4% on the rice blast test set, while frames per second dropped only modestly, from 147.9 to 122.1. Our results suggest that the GCPDFFNet model can accurately detect in situ rice blast disease while its inference speed meets real-time requirements.
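The SCYLLA normalized Wasserstein distance builds on the normalized Gaussian Wasserstein distance often used for tiny objects, in which a box (cx, cy, w, h) is modelled as a 2-D Gaussian. A hedged sketch of that base distance follows; the constant c is dataset-dependent, and the paper's SCYLLA variant may differ in detail:

```python
import math

def nwd(box_a, box_b, c=12.8):
    """Normalized Wasserstein distance between (cx, cy, w, h) boxes modelled
    as 2-D Gaussians N([cx, cy], diag(w^2/4, h^2/4)); c is a scale constant."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Squared 2-Wasserstein distance between the two Gaussians.
    w2_sq = ((ax - bx) ** 2 + (ay - by) ** 2
             + ((aw - bw) / 2) ** 2 + ((ah - bh) / 2) ** 2)
    # Exponential normalization maps the distance into (0, 1].
    return math.exp(-math.sqrt(w2_sq) / c)

print(nwd((10, 10, 4, 4), (10, 10, 4, 4)))  # identical boxes -> 1.0
```

Unlike IoU, this similarity degrades smoothly for small, slightly misaligned boxes instead of dropping to zero, which is why Wasserstein-based losses help small-lesion detection.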


Subjects
Oryza, Plant Diseases, Computer-Assisted Image Processing/methods, Algorithms
13.
Plant Cell Rep ; 43(5): 126, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38652181

ABSTRACT

KEY MESSAGE: Innovatively, we treat stomatal detection as rotated object detection and provide an end-to-end, batch, rotated, real-time stomatal density and aperture size intelligent detection and identification system, RotatedStomataNet. Stomata act as pathways for air and water vapor during respiration, transpiration, and other gas exchange, so the stomatal phenotype is important for plant growth and development. Intelligent, high-throughput detection of stomata is a key issue. Nevertheless, currently available methods usually suffer from detection errors or cumbersome operations when facing densely and unevenly arranged stomata. The proposed RotatedStomataNet innovatively regards stomata detection as rotated object detection, enabling end-to-end, real-time, and intelligent phenotype analysis of stomata and apertures. The system is constructed based on Arabidopsis and maize stomatal data sets acquired destructively, and a maize stomatal data set acquired in a non-destructive way, enabling one-stop automatic collection of phenotypes, such as the location, density, length, and width of stomata and apertures, without step-by-step operations. The accuracy of this system in acquiring stomata and apertures has been well demonstrated in monocotyledons and dicotyledons, such as Arabidopsis, soybean, wheat, and maize. Experimental results show that the predictions of the method are consistent with manual labeling. The test sets, the system code, and their usage are also given ( https://github.com/AITAhenu/RotatedStomataNet ).


Subjects
Arabidopsis, Phenotype, Plant Stomata, Zea mays, Plant Stomata/physiology, Zea mays/genetics, Zea mays/physiology, Zea mays/growth & development, Arabidopsis/genetics, Arabidopsis/physiology
14.
BMC Med Imaging ; 24(1): 152, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38890604

ABSTRACT

BACKGROUND: Leishmaniasis is a vector-borne neglected parasitic disease caused by parasites of the genus Leishmania. Of the 30 Leishmania species, 21 cause human infection, affecting the skin and the internal organs. Around 700,000 to 1,000,000 new cases and 26,000 to 65,000 deaths are reported worldwide annually. The disease has three clinical presentations, namely cutaneous, muco-cutaneous, and visceral leishmaniasis, which affect the skin, mucosal membranes, and internal organs, respectively. The relapsing behavior of the disease limits the efficiency of its diagnosis and treatment. Common diagnostic approaches follow subjective, error-prone, repetitive processes. Despite an ever-pressing need for accurate detection of leishmaniasis, research conducted so far is scarce. The main aim of the current research is therefore to develop an artificial intelligence-based detection tool for leishmaniasis from Giemsa-stained microscopic images using deep learning. METHODS: Stained microscopic images were acquired locally and labeled by experts. The images were augmented using different methods to prevent overfitting and improve the generalizability of the system. Fine-tuned Faster RCNN, SSD, and YOLOv5 models were used for object detection. Mean average precision (mAP), precision, and recall were calculated to evaluate and compare the performance of the models. RESULTS: The fine-tuned YOLOv5 outperformed the other models, Faster RCNN and SSD, with mAP scores of 73%, 54%, and 57%, respectively. CONCLUSION: The YOLOv5 model developed here can be tested in clinics to assist laboratorists in diagnosing leishmaniasis from microscopic images. Particularly in low-resourced healthcare facilities with fewer qualified medical professionals or hematologists, our AI support system can help reduce diagnosis time, workload, and misdiagnosis.
Furthermore, the dataset we collected will be shared with other researchers who seek to improve the detection of the parasite. The current model detects parasites even in the presence of monocytes, but accuracy sometimes decreases owing to differences in the sizes of the parasite cells alongside the blood cells. The incorporation of cascaded networks and quantification of the parasite load in future work should overcome the limitations of the current system.


Subjects
Azure Stains, Deep Learning, Microscopy, Humans, Microscopy/methods, Leishmaniasis/diagnostic imaging, Leishmaniasis/parasitology, Leishmania/isolation & purification
15.
World J Surg Oncol ; 22(1): 2, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38167161

ABSTRACT

BACKGROUND: Breast ultrasound (US) is useful for dense breasts, and the introduction of artificial intelligence (AI)-assisted diagnosis of breast US images should be considered. However, the implementation of AI-based technologies in clinical practice is problematic because of the costs of introducing such approaches to hospital information systems (HISs) and the security risk of connecting an HIS to the Internet to access AI services. To solve these problems, we developed a system that applies AI to the analysis of breast US images captured with a smartphone. METHODS: Training data were prepared using 115 images of benign lesions and 201 images of malignant lesions acquired at the Division of Breast Surgery, Gifu University Hospital. YOLOv3 (an object detection model) was used to detect lesions on US images. A graphical user interface (GUI) was developed for making predictions on an AI server. A smartphone application was also developed for capturing US images displayed on the HIS monitor with its camera and displaying the prediction results received from the AI server. The sensitivity and specificity of the predictions performed on the AI server and via the smartphone were calculated using 60 images held out from training. RESULTS: The established AI showed 100% sensitivity and 75% specificity for malignant lesions and took 0.2 s per prediction on the AI server. Prediction via the smartphone required 2 s per prediction and showed 100% sensitivity and 97.5% specificity for malignant lesions. CONCLUSIONS: Good-quality predictions were obtained using the AI server. Moreover, the quality of prediction via the smartphone was slightly better than that on the AI server, and the system can be safely and inexpensively introduced into HISs.
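Sensitivity and specificity as reported here come straight from confusion-matrix counts; a minimal helper (the example counts below are illustrative, not the study's actual tally of the 60 test images):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# e.g. all 20 malignant images flagged, 39 of 40 benign images correctly passed:
print(sensitivity_specificity(tp=20, fn=0, tn=39, fp=1))  # (1.0, 0.975)
```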


Subjects
Artificial Intelligence, Smartphone, Female, Humans, Sensitivity and Specificity, Breast Ultrasonography
16.
BMC Med Inform Decis Mak ; 24(1): 126, 2024 May 16.
Article in English | MEDLINE | ID: mdl-38755563

ABSTRACT

BACKGROUND: Chest X-ray imaging-based abnormality localization, essential in diagnosing various diseases, faces significant clinical challenges due to complex interpretations and the growing workload of radiologists. While recent advances in deep learning offer promising solutions, there is still a critical issue of domain inconsistency in cross-domain transfer learning, which hampers the efficiency and accuracy of diagnostic processes. This study aims to address the domain inconsistency problem and improve automatic abnormality localization performance of heterogeneous chest X-ray image analysis, particularly in detecting abnormalities, by developing a self-supervised learning strategy called "BarlowTwins-CXR". METHODS: We utilized two publicly available datasets: the NIH Chest X-ray Dataset and VinDr-CXR. The BarlowTwins-CXR approach was conducted in a two-stage training process. Initially, self-supervised pre-training was performed using an adjusted Barlow Twins algorithm on the NIH dataset with a ResNet50 backbone pre-trained on ImageNet. This was followed by supervised fine-tuning on the VinDr-CXR dataset using Faster R-CNN with a Feature Pyramid Network (FPN). The study employed mean average precision (mAP) at an Intersection over Union (IoU) of 50% and area under the curve (AUC) for performance evaluation. RESULTS: Our experiments showed a significant improvement in model performance with BarlowTwins-CXR. The approach achieved a 3% increase in mAP50 accuracy compared with traditional ImageNet pre-trained models. In addition, the Ablation CAM method revealed enhanced precision in localizing chest abnormalities. The study involved 112,120 images from the NIH dataset and 18,000 images from the VinDr-CXR dataset, indicating robust training and testing samples.
CONCLUSION: BarlowTwins-CXR significantly enhances the efficiency and accuracy of chest X-ray image-based abnormality localization, outperforming traditional transfer learning methods and effectively overcoming domain inconsistency in cross-domain scenarios. Our experimental results demonstrate the potential of using self-supervised learning to improve the generalizability of models in medical settings with limited amounts of heterogeneous data. This approach can be instrumental in aiding radiologists, particularly in high-workload environments, offering a promising direction for future AI-driven healthcare solutions.
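The Barlow Twins objective used for pre-training pushes the cross-correlation matrix of two augmented views' embeddings toward the identity; a tiny pure-Python sketch, assuming inputs already standardised per dimension (not the BarlowTwins-CXR implementation):

```python
def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Barlow Twins loss for two views' embeddings (lists of per-sample
    vectors, each dimension standardised to zero mean and unit variance)."""
    n, d = len(z_a), len(z_a[0])
    loss = 0.0
    for i in range(d):
        for j in range(d):
            # Cross-correlation between dimension i of view A and j of view B.
            c_ij = sum(z_a[k][i] * z_b[k][j] for k in range(n)) / n
            if i == j:
                loss += (1.0 - c_ij) ** 2   # invariance term: diagonal -> 1
            else:
                loss += lam * c_ij ** 2     # redundancy reduction: off-diagonal -> 0
    return loss

# Identical views with decorrelated dimensions give zero loss:
z = [[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]]
print(barlow_twins_loss(z, z))  # 0.0
```

Because the loss needs no negative pairs, it suits medical imaging, where assembling contrastive negatives across scanners and sites is awkward.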


Subjects
Thoracic Radiography, Supervised Machine Learning, Humans, Deep Learning, Computer-Assisted Radiographic Image Interpretation/methods, Datasets as Topic
17.
Int J Biometeorol ; 68(2): 305-316, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38036707

ABSTRACT

Winter tourism is an important economic factor in the European Alps, which could be exposed to severely changing meteorological conditions due to climate change in the future. The extent to which meteorology influences winter tourism figures has so far been analyzed mainly based on monthly or seasonal data and in relation to skier numbers. Therefore, we record for the first time daily visitor numbers at five Bavarian winter tourism destinations based on 1518 webcam images using object detection and link them to meteorological and time-related variables. Our results show that parameters such as temperature, cloud cover or sunshine duration, precipitation, snow depth, wind speed, and relative humidity play a role especially at locations that include other forms of winter tourism in addition to skiing. In the ski resorts studied, on the other hand, skiing is mostly independent of current weather conditions, which can be attributed mainly to artificial snowmaking. Moreover, at the webcam sites studied, weekends and vacation periods had an equal or even stronger influence on daily visitor numbers than the current weather conditions. The extent to which weather impacts the (future) visitor numbers of a winter tourism destination must therefore be investigated individually and with the inclusion of non-meteorological variables influencing human behavior.


Subjects
Recreation, Weather, Humans, Seasons, Snow, Temperature
18.
J Neuroeng Rehabil ; 21(1): 106, 2024 Jun 22.
Article in English | MEDLINE | ID: mdl-38909239

ABSTRACT

BACKGROUND: Falls are common in a range of clinical cohorts, where routine risk assessment often comprises subjective visual observation only. Typically, observational assessment involves evaluation of an individual's gait during scripted walking protocols within a lab to identify deficits that potentially increase fall risk, but subtle deficits may not be (readily) observable. Therefore, objective approaches (e.g., inertial measurement units, IMUs) are useful for quantifying high resolution gait characteristics, enabling more informed fall risk assessment by capturing subtle deficits. However, IMU-based gait instrumentation alone is limited, failing to consider participant behaviour and details within the environment (e.g., obstacles). Video-based eye-tracking glasses may provide additional insight to fall risk, clarifying how people traverse environments based on head and eye movements. Recording head and eye movements can provide insights into how the allocation of visual attention to environmental stimuli influences successful navigation around obstacles. Yet, manual review of video data to evaluate head and eye movements is time-consuming and subjective. An automated approach is needed but none currently exists. This paper proposes a deep learning-based object detection algorithm (VARFA) to instrument vision and video data during walks, complementing instrumented gait. METHOD: The approach automatically labels video data captured in a gait lab to assess visual attention and details of the environment. The proposed algorithm uses a YoloV8 model trained on with a novel lab-based dataset. RESULTS: VARFA achieved excellent evaluation metrics (0.93 mAP50), identifying, and localizing static objects (e.g., obstacles in the walking path) with an average accuracy of 93%. 
Similarly, a U-Net-based track/path segmentation model achieved good metrics (IoU 0.82), suggesting that the predicted tracks (i.e., walking paths) align closely with the actual track, with an overlap of 82%. Notably, both models achieved these metrics while processing at real-time speeds, demonstrating efficiency and effectiveness for pragmatic applications. CONCLUSION: The instrumented approach improves the efficiency and accuracy of fall risk assessment by evaluating the allocation of visual attention (i.e., information about when and where a person is attending) during navigation, broadening the instrumentation available in this area. VARFA could be used to better inform fall risk assessment by providing behaviour and context data to complement instrumented (e.g., IMU) data during gait tasks. That may have notable (e.g., personalized) rehabilitation implications across a wide range of clinical cohorts where poor gait and increased fall risk are common.
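The IoU 0.82 reported for the segmentation model is the standard intersection-over-union between the predicted and annotated track masks. A minimal stdlib-only sketch of that metric on binary masks (the toy masks below are illustrative, not from the study):

```python
def mask_iou(pred, truth):
    """Intersection-over-Union between two binary masks given as nested 0/1 lists."""
    inter = union = 0
    for prow, trow in zip(pred, truth):
        for p, t in zip(prow, trow):
            inter += p & t   # pixel in both masks
            union += p | t   # pixel in either mask
    return inter / union if union else 1.0

# Toy 4x4 masks: predicted walking path vs. annotated path.
pred  = [[1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0]]
truth = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
iou = mask_iou(pred, truth)  # intersection 5, union 7
```

An IoU of 0.82 thus means the overlapping pixels are 82% of all pixels covered by either the predicted or the true track.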


Subjects
Accidental Falls, Deep Learning, Walking, Accidental Falls/prevention & control, Humans, Risk Assessment/methods, Walking/physiology, Male, Female, Adult, Eye Tracking Technology, Eye Movements/physiology, Gait/physiology, Video Recording, Young Adult
19.
Article in English | MEDLINE | ID: mdl-39015056

ABSTRACT

PURPOSE: This study aims to evaluate the effectiveness of advanced deep learning models, specifically YOLOv8 and EfficientNetV2, in detecting meniscal tears on magnetic resonance imaging (MRI) using a relatively small data set. METHOD: Our data set consisted of MRI studies from 642 knees; two orthopaedic surgeons labelled and annotated the MR images. The training pipeline included MRI scans of these knees and was divided into two stages: initially, a deep learning algorithm called YOLO was employed to identify the meniscus location, and subsequently, the EfficientNetV2 deep learning architecture was utilized to detect meniscal tears. A concise report indicating the location and detection of a torn meniscus is provided at the end. RESULT: The YOLOv8 model achieved mean average precision at 50% threshold (mAP@50) scores of 0.98 in the sagittal view and 0.985 in the coronal view. Similarly, the EfficientNetV2 model obtained area under the curve scores of 0.97 and 0.98 in the sagittal and coronal views, respectively. These results demonstrate exceptional performance in meniscus localization and tear detection. CONCLUSION: Despite a relatively small data set, state-of-the-art models like YOLOv8 and EfficientNetV2 yielded promising results. This artificial intelligence system enhances meniscal injury diagnosis by generating instant structured reports, facilitating faster image interpretation and reducing physician workload. LEVEL OF EVIDENCE: Level III.
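The two-stage design described above (localize the meniscus, then classify the crop) can be sketched as a small pipeline. This is a schematic with stub stages standing in for YOLOv8 and EfficientNetV2; the stubs, box coordinates, and threshold rule are hypothetical, not the study's models:

```python
def crop(image, box):
    """Crop a 2D image (list of rows) to an axis-aligned box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]

def two_stage_pipeline(image, detector, classifier):
    """Stage 1: localize the meniscus; stage 2: classify the crop as torn/intact."""
    box = detector(image)        # stage 1: a YOLO-style localizer
    roi = crop(image, box)       # hand only the region of interest to stage 2
    return {"box": box, "tear": classifier(roi)}

# Hypothetical stubs: a fixed ROI and a toy intensity-threshold "classifier".
fake_detector = lambda img: (1, 1, 4, 3)
fake_classifier = lambda roi: sum(map(sum, roi)) >= 2
slice_ = [[0] * 5 for _ in range(5)]
slice_[2][2] = slice_[2][3] = 1
report = two_stage_pipeline(slice_, fake_detector, fake_classifier)
```

The returned dict mirrors the "concise report" idea: the tear decision together with the location it was made at.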

20.
Sensors (Basel) ; 24(11)2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38894387

ABSTRACT

As remote sensing technology has advanced, satellites and related platforms have become increasingly prevalent in daily life, playing a crucial role in hydrology, agriculture, and geography. Nevertheless, the distinct characteristics of remote sensing imagery, including expansive scenes and small, densely packed targets, pose many challenges for object detection and lead to insufficient detection accuracy. Consequently, a new model is essential to enhance identification capabilities for objects in remote sensing imagery. To address these constraints, we designed the OD-YOLO approach, which uses multi-scale feature fusion to improve the performance of the YOLOv8n model in small target detection. First, traditional convolutions have poor recognition capabilities for certain geometric shapes; therefore, we introduce the Detection Refinement Module (DRmodule) into the backbone architecture. This module utilizes Deformable Convolutional Networks and the Hybrid Attention Transformer to strengthen the model's feature extraction from geometric shapes and blurred objects. Meanwhile, building on the YOLO Feature Pyramid Network, we introduce a Dynamic Head at the head of the model framework to strengthen the fusion of features at different scales in the feature pyramid. Additionally, to address the detection of small objects in remote sensing images, we specifically design the OIoU loss function to finely describe the difference between the detection box and the ground-truth box, further enhancing model performance.
Experiments on the VisDrone dataset show that OD-YOLO surpasses the compared models by at least 5.2% in mAP50 and 4.4% in mAP75, and experiments on the Foggy Cityscapes dataset show that OD-YOLO improves mAP by 6.5%, indicating outstanding results in remote sensing and adverse-weather object detection tasks. This work not only advances research in remote sensing image analysis, but also provides effective technical support for the practical deployment of future remote sensing applications.
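The OIoU loss itself is not specified in the abstract, so it is not reproduced here. As background for the mAP50/mAP75 figures, a minimal sketch of plain box IoU and how the two thresholds use it to decide whether a detection counts as a true positive (the boxes below are illustrative):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

pred, truth = (0, 0, 10, 10), (2, 0, 12, 10)
iou = box_iou(pred, truth)     # intersection 80, union 120
hit_at_50 = iou >= 0.50        # counts as a true positive for mAP50
hit_at_75 = iou >= 0.75        # mAP75 demands much tighter localization
```

A model can therefore score well at mAP50 yet drop at mAP75, which is why the abstract reports improvements at both thresholds.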
