Results 1 - 9 of 9
1.
Sensors (Basel) ; 20(7)2020 Apr 08.
Article in English | MEDLINE | ID: mdl-32276462

ABSTRACT

In the Cultural Heritage (CH) context, art galleries and museums employ technology to enhance and personalise the museum visit experience. The most challenging aspect, however, is determining what a visitor is interested in. In this work, we propose a novel Visual Attentive Model (VAM) learned from eye-tracking data. In particular, we collected eye-tracking data from adults and children observing five artworks with similar characteristics, selected by CH experts: the three "Ideal Cities" (Urbino, Baltimore and Berlin), the Inlaid Chest in the National Gallery of the Marche, and the wooden panel with a Marche view in the "Studiolo del Duca". Experts recognise these works as having analogous features, so they provide coherent visual stimuli. Our method combines a new coordinate representation of eye-movement sequences, obtained using Geometric Algebra, with a deep learning model for the automated recognition (identification, differentiation, or authentication) of people from the attention focus of their distinctive eye-movement patterns. Experiments comparing five Deep Convolutional Neural Networks (DCNNs) yielded high accuracy (more than 80%), demonstrating the effectiveness and suitability of the proposed approach for identifying adult and child museum visitors.
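As a concrete illustration of the kind of pipeline described above (not the authors' code), the sketch below rasterises a gaze sequence into a fixation heatmap and feeds it to a small convolutional network that scores the viewer as adult or child; the heatmap encoding merely stands in for the paper's Geometric Algebra representation, and all shapes and names are illustrative.

import numpy as np
import torch
import torch.nn as nn

def gaze_to_heatmap(fixations, size=64):
    """Rasterise (x, y) fixations, normalised to [0, 1], into a size x size grid."""
    grid = np.zeros((size, size), dtype=np.float32)
    for x, y in fixations:
        grid[min(int(y * size), size - 1), min(int(x * size), size - 1)] += 1.0
    return grid / max(grid.max(), 1e-6)  # scale counts into [0, 1]

class GazeCNN(nn.Module):
    """Tiny DCNN stand-in for the five architectures compared in the paper."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 16 * 16, 2))  # logits: adult, child

    def forward(self, x):
        return self.net(x)

model = GazeCNN()
heatmap = torch.from_numpy(gaze_to_heatmap([(0.3, 0.4), (0.52, 0.41)]))
logits = model(heatmap[None, None])  # add batch and channel dimensions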

2.
Sensors (Basel) ; 19(6)2019 Mar 21.
Article in English | MEDLINE | ID: mdl-30901817

ABSTRACT

In increasingly hyper-connected societies, where individuals rely on short and fast online communications to consume information, museums face a significant survival challenge. Collaborations between scientists and museums suggest that the technological framework known as the Internet of Things (IoT) will be a key player in tackling this challenge. IoT can be used to gather and analyse visitor-generated data, leading to data-driven insights that can fuel novel, adaptive and engaging museum experiences. We used an IoT implementation, a sensor network installed in the physical space of a museum, to look at how single visitors chose to enter and spend time in the different rooms of a curated exhibition. We collected a sparse, non-overlapping dataset of individual visits. Using various statistical analyses, we found that visitor attention span was very short: people visited five out of twenty rooms on average, and spent a median of two minutes in each room. However, the patterns of choice and time spent in rooms were not random. Indeed, they could be described in terms of a set of linearly separable visit patterns that we obtained using principal component analysis. These results are encouraging for future interdisciplinary research that seeks to leverage IoT to obtain numerical proxies for people's attention inside the museum, and to use this information to fuel the next generation of museum interactions. Such interactions will be based on rich, non-intrusive and diverse IoT-driven conversations, dynamically tailored to visitors.
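A minimal sketch of the core analysis, on synthetic stand-in data (the real dataset is not reproduced here): principal component analysis applied to a visits-by-rooms matrix of dwell times, whose leading components play the role of the paper's visit patterns.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
dwell = rng.exponential(scale=2.0, size=(500, 20))  # minutes per room, synthetic
dwell[dwell < 1.5] = 0.0                            # most rooms are skipped

pca = PCA(n_components=3)
scores = pca.fit_transform(dwell)     # each visit expressed as pattern weights
print(pca.explained_variance_ratio_)  # variance captured by each visit pattern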

3.
Sensors (Basel) ; 18(10)2018 Oct 15.
Article in English | MEDLINE | ID: mdl-30326647

ABSTRACT

Person re-identification is an important topic in retail, scene monitoring, human-computer interaction, people counting, ambient assisted living and many other application fields. TVPR (Top View Person Re-Identification), a dataset for person re-identification based on a number of significant features derived from both depth and color images, was previously built. This dataset uses an RGB-D camera in a top-view configuration to extract anthropometric features for recognising the people in view of the camera, reducing the problem of occlusions while remaining privacy-preserving. In this paper, we introduce a machine learning method for person re-identification using the TVPR dataset. In particular, we propose combining multiple k-nearest neighbor classifiers based on different distance functions and feature subsets derived from depth and color images. Moreover, neighborhood component feature selection is used to learn the depth features' weighting vector by minimizing the leave-one-out regularized training error. The classification process is performed by selecting each person's first passage under the camera for training and using the others as the testing set. Experimental results show that the proposed methodology outperforms standard supervised classifiers widely used for the re-identification task. This improvement encourages the application of this approach in the retail context, in order to improve retail analytics, customer service and shopping space management.
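The combination of k-nearest neighbor classifiers described above might look roughly like the following sketch, assuming per-passage feature vectors; the feature values, identity labels, the three distance functions and the soft-voting rule are illustrative, not the TVPR protocol.

import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 10))     # synthetic depth + color feature vectors
y = rng.integers(0, 20, 200)  # 20 person identities

ensemble = VotingClassifier(
    estimators=[
        ("euclidean", KNeighborsClassifier(n_neighbors=3, metric="euclidean")),
        ("manhattan", KNeighborsClassifier(n_neighbors=3, metric="manhattan")),
        ("chebyshev", KNeighborsClassifier(n_neighbors=3, metric="chebyshev")),
    ],
    voting="soft",            # average class probabilities across metrics
)
ensemble.fit(X, y)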

4.
Neural Netw ; 175: 106278, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38581809

ABSTRACT

In the field of deep learning, large quantities of data are typically required to effectively train models. This challenge has given rise to techniques like zero-shot learning (ZSL), which trains models on a set of "seen" classes and evaluates them on a set of "unseen" classes. Although ZSL has shown considerable potential, particularly with the employment of generative methods, its generalizability to real-world scenarios remains uncertain. The hypothesis of this work is that the performance of ZSL models is systematically influenced by the chosen "splits", in particular by the statistical properties of the classes and attributes used in training. In this paper, we test this hypothesis by introducing the concepts of generalizability and robustness in attribute-based ZSL and carrying out a variety of experiments to stress-test ZSL models against different splits. Our aim is to lay the groundwork for future research on ZSL models' generalizability, robustness, and practical applications. We evaluate the accuracy of state-of-the-art models on benchmark datasets and identify consistent trends in generalizability and robustness. We analyze how these properties vary with the dataset type, differentiating between coarse- and fine-grained datasets, and our findings indicate significant room for improvement in both generalizability and robustness. Furthermore, our results demonstrate the effectiveness of dimensionality reduction techniques in improving the performance of state-of-the-art models on fine-grained datasets.
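The split-sensitivity idea can be made concrete with a toy attribute-based ZSL experiment (synthetic data, not the paper's benchmark protocol): a linear map from features to attributes is trained on seen classes, unseen classes are predicted by the nearest class-attribute vector, and accuracy is measured across several random splits, whose spread is the quantity of interest.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_classes, n_attr, n_feat = 20, 12, 64
A = rng.random((n_classes, n_attr))  # class-attribute matrix (synthetic)
W = rng.random((n_attr, n_feat))     # hidden attribute-to-feature map

def sample(classes, per_class=30):
    X = np.vstack([A[c] @ W + 0.3 * rng.standard_normal((per_class, n_feat))
                   for c in classes])
    return X, np.repeat(classes, per_class)

accs = []
for seed in range(10):               # stress-test across random splits
    split_rng = np.random.default_rng(seed)
    unseen = split_rng.choice(n_classes, 5, replace=False)
    seen = np.setdiff1d(np.arange(n_classes), unseen)
    Xs, ys = sample(seen)
    reg = Ridge().fit(Xs, A[ys])     # learn features -> attributes on seen classes
    Xu, yu = sample(unseen)
    d = ((reg.predict(Xu)[:, None, :] - A[unseen][None, :, :]) ** 2).sum(-1)
    accs.append((unseen[d.argmin(1)] == yu).mean())
print(np.mean(accs), np.std(accs))   # the spread reflects split sensitivity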


Subjects
Deep Learning; Neural Networks, Computer; Humans; Algorithms; Machine Learning
5.
Sci Rep ; 14(1): 15081, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956250

ABSTRACT

The illicit traffic of cultural goods remains a persistent global challenge, despite the proliferation of comprehensive legislative frameworks developed to address and prevent cultural property crimes. Online platforms, especially social media and e-commerce, have facilitated illegal trade and pose significant challenges for law enforcement agencies. To address this issue, the European project SIGNIFICANCE was launched with the aim of combating the illicit traffic of Cultural Heritage (CH) goods. This paper presents the outcomes of the project, introducing a user-friendly platform that employs Artificial Intelligence (AI) and Deep Learning (DL) to prevent and combat illicit activities. The platform enables authorities to identify, track, and block illegal activities in the online domain, thereby aiding successful prosecutions of criminal networks. Moreover, it incorporates an ontology-based approach, providing comprehensive information on the cultural significance, provenance, and legal status of identified artefacts. This enables users to access valuable contextual information during the scraping and classification phases, facilitating informed decision-making and targeted actions. To accomplish these objectives, computationally intensive tasks are executed on the HPC CyClone infrastructure, optimizing computing resources, time, and cost efficiency. Notably, the infrastructure supports algorithm modelling and training, as well as web, dark-web and social media scraping and data classification. Preliminary results indicate a 10-15% increase in the identification of illicit artefacts, demonstrating the platform's effectiveness in enhancing law enforcement capabilities.
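As a purely hypothetical sketch of the scrape-and-flag step (the SIGNIFICANCE platform itself is not public), the snippet below fetches a listing page and flags it against a watchlist of terms; the URL handling, parsing logic and keyword list are placeholders, not the project's configuration.

import requests
from bs4 import BeautifulSoup

WATCHLIST = {"amphora", "icon", "relief", "antiquity"}  # illustrative terms only

def scan_listing(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    title = soup.title.get_text(strip=True).lower() if soup.title else ""
    hits = sorted(term for term in WATCHLIST if term in title)
    return {"url": url, "flagged": bool(hits), "matched_terms": hits}

# Flagged listings would then pass to the DL classifier and the ontology
# layer for provenance and legal-status checks, as described above.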

6.
Soc Netw Anal Min ; 12(1): 33, 2022.
Article in English | MEDLINE | ID: mdl-35154503

ABSTRACT

Social networks are increasingly used for discussing all kinds of topics, including those related to politics, serving as a virtual arena. Consequently, analysing online conversations, for example, to predict election outcomes, is becoming a popular and challenging research area. On social networking sites, citizens express themselves spontaneously regarding political topics, often driven by specific events in social life. Real-time analysis of social media can provide valuable feedback and insights to both politicians and news agencies. In this paper, we discuss the design and implementation of a system for tracking and analysing social media. The SocMINT system provides an easy-to-use, visual dashboard to monitor the discussion on specific topics, to capture trends in communities and, by iteratively applying multidimensional data analysis and filtering, to deeply analyse posts and influencers. SocMINT aggregates data from multiple social sources and performs sentiment analysis on textual, visual and mixed content via a specifically designed neural network architecture. The system was applied in a real context by administrative staff of a political party to effectively analyse candidates' political communication on Facebook, Instagram and Twitter and the related online community reactions and discussion. In the paper, we report on this real-world case study, showing how the system meaningfully captures trends in public opinion, comparing the main KPIs provided by SocMINT with the outcomes of traditional polls.
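For the textual branch of such a pipeline, an off-the-shelf sentiment model can stand in for SocMINT's custom multimodal network (which is not public); a minimal sketch with the Hugging Face transformers library:

from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default pretrained English model
posts = [
    "Great speech at the rally tonight!",
    "Another broken promise from this candidate.",
]
for post, result in zip(posts, sentiment(posts)):
    print(result["label"], round(result["score"], 2), post)
# Aggregating such scores per candidate and per day yields the kind of
# trend curves a dashboard like SocMINT's would display.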

7.
PLoS One ; 16(7): e0253868, 2021.
Article in English | MEDLINE | ID: mdl-34197526

ABSTRACT

Vehicle trajectory prediction has attracted growing interest in recent years, with applications in several domains ranging from autonomous driving to traffic congestion prediction and urban planning. Predicting trajectories starting from Floating Car Data (FCD) is a complex task that comes with different challenges, namely Vehicle-to-Infrastructure (V2I) interaction, Vehicle-to-Vehicle (V2V) interaction, multimodality, and generalizability. These challenges have not been completely explored by state-of-the-art works; multimodality and generalizability in particular have been neglected the most, and this work attempts to fill this gap by proposing and defining new datasets, metrics, and methods to help understand and predict vehicle trajectories. We propose and compare Deep Learning models based on Long Short-Term Memory and Generative Adversarial Network architectures; in particular, our GAN-3 model can be used to generate multiple predictions in multimodal scenarios. These approaches are evaluated with our newly proposed error metrics N-ADE and N-FDE, which normalize some biases in the standard Average Displacement Error (ADE) and Final Displacement Error (FDE) metrics. Experiments have been conducted using newly collected datasets in four large Italian cities (Rome, Milan, Naples, and Turin), considering different trajectory lengths to analyze error growth over a larger number of time-steps. The results show that, although LSTM-based models are superior in unimodal scenarios, generative models perform best where the effects of multimodality are higher. Space-time and geographical analyses are performed to demonstrate the suitability of the proposed methodology for real cases and management services.
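The standard displacement metrics, plus one plausible normalisation, can be written compactly; note that the paper's exact N-ADE/N-FDE definitions are not spelled out in the abstract, so the path-length normalisation below is an assumption.

import numpy as np

def ade(pred, true):
    """Average Displacement Error: mean point-wise distance over all timesteps."""
    return np.linalg.norm(pred - true, axis=-1).mean()

def fde(pred, true):
    """Final Displacement Error: distance at the last timestep only."""
    return np.linalg.norm(pred[-1] - true[-1])

def n_ade(pred, true):
    # Assumed normalisation: divide by ground-truth path length so longer
    # trajectories are not penalised more than shorter ones.
    path_len = np.linalg.norm(np.diff(true, axis=0), axis=-1).sum()
    return ade(pred, true) / max(path_len, 1e-9)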


Subjects
Automobile Driving/statistics & numerical data; Deep Learning; Forecasting/methods; Motor Vehicles/statistics & numerical data; Cities/statistics & numerical data
8.
IEEE J Transl Eng Health Med ; 8: 3000112, 2020.
Article in English | MEDLINE | ID: mdl-33150095

ABSTRACT

Objective: Decision support systems (DSS) have been developed and promoted for their potential to improve the quality of health care. However, a common clinical strategy is still lacking, clinical resources are poorly managed, and preventive medicine is implemented erratically. Methods: To overcome these problems, this work proposes an integrated system that relies on the creation and sharing of a database extracted from GPs' Electronic Health Records (EHRs) within the Netmedica Italia (NMI) cloud infrastructure. Although the proposed system is a pilot application specifically tailored to improving chronic Type 2 Diabetes (T2D) care, it could easily be adapted to manage other chronic diseases. The proposed DSS is based on the EHR structure used by GPs in their daily activities and follows the most up-to-date guidelines on data protection and sharing. The DSS is equipped with a Machine Learning (ML) method for analyzing the shared EHRs, thus tackling their high variability. A novel set of T2D care-quality indicators is used to determine the economic incentives, and the T2D features serve as predictors in the proposed ML approach. Results: The EHRs of 41,237 T2D patients were analyzed. No data collection beyond standard clinical practice was required. The DSS exhibited competitive performance (up to an overall accuracy of 98%±2% and a macro-recall of 96%±1%) in classifying chronic-care quality across the different follow-up phases. The chronic-care quality model led to a significant increase (up to 12%) in the number of T2D patients without complications. GPs who agreed to use the proposed system received an economic incentive, with a further bonus assigned when performance targets were achieved. Conclusions: The care-quality evaluation in a clinical use-case scenario demonstrated how empowering GPs through the platform (which integrates the proposed DSS), along with the economic incentives, may speed up the improvement of care.
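A minimal sketch of the ML component on synthetic data: a supervised classifier mapping binary EHR-derived indicators to a care-quality label. The indicator set, the label rule and the random-forest choice are illustrative; the abstract does not disclose the exact model.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# columns: illustrative indicators (HbA1c checked, eye exam, foot exam, ...)
X = rng.integers(0, 2, size=(1000, 8)).astype(float)
y = (X.sum(axis=1) >= 5).astype(int)  # toy "adequate care" label

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy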

9.
Int J Med Inform ; 129: 267-274, 2019 09.
Article in English | MEDLINE | ID: mdl-31445266

ABSTRACT

Today, e-health has entered the everyday workflow of a variety of healthcare providers. General practitioners (GPs) are the largest category in the public health service, with about 60,000 GPs throughout Italy. Here, we present the Nu.Sa. project, operating in Italy, which has established one of the first GP healthcare information systems based on heterogeneous data sources. The system connects all providers and provides full access to clinical and health-related data. This goal is achieved through a novel technological infrastructure for data sharing, based on interoperability specifications recognised at the national level, for messages transmitted from GP providers to the central domain. All data standards are publicly available and subject to continuous improvement. Currently, the system manages more than 5,000 GPs with about 5,500,000 patients in total, handling 4,700,000 pharmacological e-prescriptions and 1,700,000 e-prescriptions for laboratory exams per month. The Nu.Sa. healthcare system can gather standardised data from 16 different kinds of GP software, connecting patients, GPs, healthcare organisations, and healthcare professionals across a large and heterogeneous territory through the implementation of data standards with a strong focus on cybersecurity. Results show that applying this approach at a national level, together with novel metrics on the architecture's scalability and the software's usability, affects the health system and GPs' professional activities.
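A hypothetical sketch of what one GP-to-central-domain message might look like; the field names and schema below are illustrative only, not the actual Nu.Sa. interoperability specification.

import json
from datetime import date, datetime, timezone

message = {
    "message_type": "e-prescription",
    "gp_id": "GP-00421",                   # placeholder identifier
    "patient_id_hash": "sha256:<digest>",  # pseudonymised, never in clear text
    "issued": date.today().isoformat(),
    "items": [{"atc_code": "A10BA02", "description": "metformin 500 mg"}],
    "sent_at": datetime.now(timezone.utc).isoformat(),
}
payload = json.dumps(message, ensure_ascii=False)
# In production the payload would be signed and sent over an encrypted
# channel, in line with the cybersecurity focus described above.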


Subjects
General Practitioners; Computer Security; Delivery of Health Care; Humans; Information Dissemination; Information Storage and Retrieval; Italy