Results 1 - 5 of 5
1.
J Comput Soc Sci ; 6(1): 315-337, 2023.
Article in English | MEDLINE | ID: mdl-36593882

ABSTRACT

This study presents a framework for quantitatively studying the geographical visual diversity of urban neighborhoods from a large collection of street-view images, using an Artificial Intelligence (AI)-based image segmentation technique. A variety of diversity indices are computed from the extracted visual semantics and used to discover relationships between urban visual appearance and socio-demographic variables. The study also validates the reliability of the method with human evaluators. The methodology and results can potentially be used to study urban features, locate houses, establish services, and better operate municipalities.
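As a rough illustration of how such diversity indices can be computed from segmentation output, the sketch below derives a Shannon diversity index from per-class pixel counts; the class set, the choice of index, and the aggregation scheme are assumptions for illustration, not the authors' exact pipeline.

    # Shannon diversity of the visual-class distribution for one neighborhood,
    # computed from per-class pixel counts produced by an image-segmentation model.
    import numpy as np

    def shannon_diversity(class_pixel_counts):
        counts = np.asarray(class_pixel_counts, dtype=float)
        p = counts / counts.sum()
        p = p[p > 0]                      # ignore classes absent from the scene
        return float(-(p * np.log(p)).sum())

    # Example: pixel counts for (building, sky, greenery, road) in one neighborhood
    print(shannon_diversity([50_000, 20_000, 15_000, 15_000]))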

2.
J Comput Soc Sci ; 5(2): 1257-1279, 2022.
Article in English | MEDLINE | ID: mdl-35602668

ABSTRACT

VisualCommunity is a platform designed to support community- or neighborhood-scale research. The platform integrates mobile, AI, and visualization techniques, along with tools that help domain researchers, practitioners, and students collect and work with spatialized video and geo-narratives. These data, which provide granular spatialized imagery and associated context gained through expert commentary, have previously proven valuable for understanding various community-scale challenges. This paper further enhances this work with the AI-based image processing and speech-transcription tools available in VisualCommunity, allowing easy exploration of the acquired semantic and visual information about the area under investigation. We describe the specific advances through use-case examples, including COVID-19-related scenarios.
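One core step such a platform needs is attaching spoken geo-narrative segments to positions along the recorded route. The sketch below is an assumed, minimal version of that alignment (nearest GPS fix by timestamp); it is not the VisualCommunity implementation, and the data layout is hypothetical.

    from bisect import bisect_left

    def spatialize_transcript(gps_track, segments):
        # gps_track: list of (t_seconds, lat, lon), sorted by time
        # segments:  list of (t_start, t_end, text) from a speech transcriber
        times = [t for t, _, _ in gps_track]
        located = []
        for t_start, t_end, text in segments:
            mid = 0.5 * (t_start + t_end)
            i = min(bisect_left(times, mid), len(times) - 1)
            _, lat, lon = gps_track[i]
            located.append({"lat": lat, "lon": lon, "text": text})
        return located

    track = [(0, 41.150, -81.360), (5, 41.151, -81.361), (10, 41.152, -81.362)]
    segs = [(1.0, 4.0, "Vacant storefront on the left"), (8.0, 9.5, "Testing site at this corner")]
    print(spatialize_transcript(track, segs))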

3.
IEEE Trans Vis Comput Graph ; 28(1): 1019-1029, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34596546

ABSTRACT

Vision-based deep learning (DL) methods have made great progress in learning autonomous driving models from large-scale crowd-sourced video datasets. These models are trained to predict instantaneous driving behaviors from video data captured by on-vehicle cameras. In this paper, we develop a geo-context-aware visualization system for studying Autonomous Driving Model (ADM) predictions together with large-scale ADM video data. The visual study is seamlessly integrated with the geographical environment by combining DL model performance with geospatial visualization techniques. Model performance measures can be studied together with a set of geospatial attributes over map views. Users can also discover and compare the prediction behaviors of multiple DL models in both city-wide and street-level analyses, together with road images and video contents. The system therefore provides a new visual exploration platform for DL model designers in autonomous driving. Use cases and a domain-expert evaluation show the utility and effectiveness of the visualization system.
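The kind of map-view aggregation such a system depends on can be sketched as follows: per-frame prediction error is binned into coarse geographic cells so it can drive a map layer. The grid size, error measure, and toy data are assumptions for illustration, not the paper's implementation.

    import pandas as pd

    frames = pd.DataFrame({
        "lat":  [41.1500, 41.1502, 41.1601, 41.1603],
        "lon":  [-81.360, -81.361, -81.370, -81.372],
        "pred_angle": [2.0, -1.0, 10.0, 12.0],   # model-predicted steering angle (deg)
        "true_angle": [1.5, -0.5,  4.0,  5.0],   # recorded human steering angle (deg)
    })
    frames["abs_err"] = (frames["pred_angle"] - frames["true_angle"]).abs()

    # ~100 m grid cells; a real system could snap frames to road segments instead
    cell = 0.001
    frames["cell"] = list(zip(frames["lat"] // cell, frames["lon"] // cell))
    per_cell = frames.groupby("cell")["abs_err"].mean()
    print(per_cell)   # one error value per cell, ready to color a map view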

4.
J Comput Soc Sci ; 4(2): 813-837, 2021.
Article in English | MEDLINE | ID: mdl-33718652

ABSTRACT

The complex interrelationship between the built environment and social problems is often described but frequently lacks the data and analytical framework needed to explore its potential in different applications. We address this gap with a machine learning (ML) approach that studies whether street-level built-environment visuals can be used to classify locations with high-crime and lower-crime activity. To train the ML model, spatialized expert narratives are used to label different locations. Semantic categories (e.g., road, sky, greenery) are extracted from Google Street View (GSV) images of those locations through a deep learning image segmentation algorithm. From these, local visual representatives are generated and used to train the classification model. The model is applied to two U.S. cities to predict which locations are linked to high crime. Results show the model can predict high- and lower-crime areas with high accuracy (above 98% and 95% in the first and second test cities, respectively).
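A hedged sketch of this kind of classification step is shown below: per-location visual feature vectors (fractions of each semantic class) are fed to a classifier. The random-forest choice, feature layout, and synthetic labels are placeholders for illustration, not the paper's exact model or data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # columns: [road, sky, greenery, building] pixel fractions per location
    X = rng.dirichlet([2, 2, 2, 2], size=200)
    y = (X[:, 3] > 0.3).astype(int)          # placeholder "high-crime" label

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))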

5.
Sensors (Basel) ; 20(23)2020 Dec 07.
Article in English | MEDLINE | ID: mdl-33297389

ABSTRACT

Human Activity Recognition (HAR) using sensors embedded in smartphones and smartwatches has gained popularity in a wide range of applications, including health-care monitoring of elderly people, security, robotics, and monitoring of employees in industry. However, human behavior analysis using accelerometer and gyroscope data is typically grounded in supervised classification techniques, where models show sub-optimal performance on qualitative and quantitative features. Considering this, this paper proposes an efficient, reduced-dimension feature extraction model for human activity recognition. In this feature extraction technique, the Enveloped Power Spectrum (EPS) is used to extract impulse components of the signal through frequency-domain analysis, which is more robust and less sensitive to noise. Linear Discriminant Analysis (LDA) is then used as a dimensionality-reduction procedure to extract the minimum number of discriminant features from the envelope spectrum. The extracted features are used for activity recognition with a Multi-class Support Vector Machine (MCSVM). The proposed model was evaluated on two benchmark datasets, the UCI-HAR and DU-MD datasets, and outperforms other state-of-the-art methods.
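A minimal sketch of the described pipeline is given below, assuming one common reading of the envelope power spectrum (FFT of the Hilbert envelope of each sensor window), followed by LDA and a multi-class SVM. The windowing, feature size, and random stand-in data are illustrative only, not the paper's setup.

    import numpy as np
    from scipy.signal import hilbert
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    def envelope_spectrum(window):
        # magnitude spectrum of the signal envelope for one accelerometer window
        env = np.abs(hilbert(window))
        return np.abs(np.fft.rfft(env - env.mean()))

    rng = np.random.default_rng(1)
    n_windows, n_samples = 300, 128
    raw = rng.standard_normal((n_windows, n_samples))    # stand-in for accelerometer windows
    y = rng.integers(0, 6, size=n_windows)               # six activity classes
    X = np.array([envelope_spectrum(w) for w in raw])

    model = make_pipeline(LinearDiscriminantAnalysis(n_components=5), SVC(kernel="rbf"))
    model.fit(X, y)
    print("training accuracy:", model.score(X, y))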


Subject(s)
Human Activities, Support Vector Machine, Accelerometry, Aged, Algorithms, Discriminant Analysis, Humans, Smartphone