Results 1 - 2 of 2
1.
Sensors (Basel) ; 22(13)2022 Jun 25.
Article in English | MEDLINE | ID: mdl-35808316

ABSTRACT

Video captioning via encoder-decoder structures is a successful approach to sentence generation. A standard way to improve model performance is to use several feature extraction networks in the encoding process to obtain multiple kinds of visual features. Such feature extraction networks are typically weight-frozen convolutional neural networks (CNNs). However, these traditional feature extraction methods have two problems. First, because the feature extraction model is frozen, it cannot learn further from the loss backpropagated during video captioning training; in particular, this prevents the feature extraction model from learning more about spatial information. Second, model complexity increases further when multiple CNNs are used. Additionally, the authors of the Vision Transformer (ViT) pointed out an inductive bias of CNNs arising from their local receptive fields. We therefore propose a full transformer structure trained end to end for video captioning to overcome these problems. As the feature extraction model, we use a ViT and propose feature extraction gates (FEGs) to enrich the input of the captioning model through that extraction model. Additionally, we design a universal encoder attraction (UEA) that collects the outputs of all encoder layers and performs self-attention over them. The UEA addresses the lack of information about the video's temporal relationships, since our method uses only appearance features. We evaluate our model against several recent models on two benchmark datasets and show competitive performance on the MSRVTT/MSVD datasets. Although the proposed model performs captioning using only a single feature, in some cases it outperforms models that use several features.
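The universal encoder attraction described in this abstract performs self-attention over the outputs of all encoder layers. A minimal, framework-free sketch of that fusion step (not the paper's implementation; the single-head form, the shapes, and treating each layer output as one summary vector are assumptions for illustration):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    # X: (n, d) matrix; here, one row per encoder-layer output.
    # Scaled dot-product self-attention with Q = K = V = X
    # (no learned projections, for simplicity).
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ X

# Hypothetical: summary vectors from 4 encoder layers, dimension 8
rng = np.random.default_rng(0)
layer_outputs = rng.random((4, 8))
fused = self_attention(layer_outputs)
print(fused.shape)  # (4, 8)
```

Each row of the fused output is a convex combination of all layer outputs, which is how attending over every encoder layer can mix information that a single final-layer feature would miss.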


Subject(s)
Attention , Neural Networks, Computer
2.
J Korean Med Sci ; 36(27): e175, 2021 Jul 12.
Article in English | MEDLINE | ID: mdl-34254471

ABSTRACT

BACKGROUND: Rapid triage reduces patients' length of stay in the emergency department (ED). The Korean Triage Acuity Scale (KTAS) is mandatorily applied at EDs in South Korea. For rapid triage, we studied machine learning-based triage systems composed of a speech recognition model and natural language processing-based classification. METHODS: We simulated 762 triage cases spanning 18 classes, defined by six main symptom types (chest pain, dyspnea, fever, stroke, abdominal pain, and headache) and three KTAS levels. In addition, we recorded conversations between emergency patients and clinicians during the simulation and used speech recognition models to transcribe them. Bidirectional Encoder Representations from Transformers (BERT), support vector machine (SVM), random forest (RF), and k-nearest neighbors (KNN) classifiers were used for KTAS and symptom classification. Additionally, we evaluated the Shapley Additive exPlanations (SHAP) values of features to interpret the classifiers. RESULTS: The character error rate of the speech recognition model was reduced to 25.21% through transfer learning. With auto-transcribed scripts, SVM (area under the receiver operating characteristic curve [AUROC], 0.86; 95% confidence interval [CI], 0.81-0.90), KNN (AUROC, 0.89; 95% CI, 0.85-0.93), RF (AUROC, 0.86; 95% CI, 0.82-0.90), and BERT (AUROC, 0.82; 95% CI, 0.75-0.87) achieved excellent classification performance. Based on SHAP, we found that "stress", "pain score point", "fever", "breath", "head", and "chest" were important vocabulary for determining KTAS and symptoms. CONCLUSION: We demonstrated the potential of an automatic KTAS classification system using speech recognition models, machine learning, and BERT-based classifiers.
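The classification step in this abstract maps auto-transcribed conversation text to symptom classes. A minimal bag-of-words KNN sketch of that idea (the toy transcripts, vocabulary, and k value below are invented for illustration; the study trained SVM, RF, KNN, and BERT on real simulated-conversation transcripts):

```python
from collections import Counter
import math

def bow(text, vocab):
    # Bag-of-words vector: count of each vocabulary word in the text
    c = Counter(text.lower().split())
    return [c[w] for w in vocab]

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def knn_predict(query, examples, k=3):
    # examples: list of (vector, label); majority vote over the
    # k training examples most similar to the query
    ranked = sorted(examples, key=lambda e: cosine(query, e[0]), reverse=True)
    labels = [lab for _, lab in ranked[:k]]
    return Counter(labels).most_common(1)[0][0]

# Hypothetical toy transcripts (not from the study's data)
train = [("chest pain pressure radiating", "chest pain"),
         ("short of breath wheezing", "dyspnea"),
         ("high fever chills", "fever"),
         ("crushing chest pain sweating", "chest pain"),
         ("fever and cough", "fever"),
         ("cannot breathe dyspnea at rest", "dyspnea")]
vocab = sorted({w for t, _ in train for w in t.lower().split()})
examples = [(bow(t, vocab), lab) for t, lab in train]

print(knn_predict(bow("severe chest pain", vocab), examples))  # chest pain
```

Words absent from the training vocabulary (here, "severe") simply drop out of the query vector; in the study's setting, SHAP values over such features are what identify which vocabulary drives the KTAS and symptom decisions.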


Subject(s)
Deep Learning , Speech Perception , Triage/methods , Adult , Aged , Emergency Medicine/methods , Emergency Service, Hospital , Humans , Middle Aged , Natural Language Processing , Patient Simulation , Proof of Concept Study , Republic of Korea , Retrospective Studies , Triage/organization & administration