Results 1 - 20 of 305
1.
Sensors (Basel) ; 24(10)2024 May 10.
Article in English | MEDLINE | ID: mdl-38793890

ABSTRACT

In our digitally driven society, advances in software and hardware for capturing video data allow extensive gathering and analysis of large datasets. This has stimulated interest in extracting information from video data, such as buildings and urban streets, to enhance understanding of the environment. Urban buildings and streets, as essential parts of cities, carry valuable information relevant to daily life. Extracting features from these elements and integrating them with technologies such as VR and AR can contribute to more intelligent and personalized urban public services. Despite its potential benefits, collecting videos of urban environments introduces challenges because of the presence of dynamic objects. The varying shape of the target building in each frame necessitates careful frame selection to ensure the extraction of quality features. To address this problem, we propose a novel evaluation metric that considers both video-inpainting-restoration quality and the relevance of the target object, minimizing areas with cars, maximizing areas with the target building, and minimizing overlapping areas. This metric extends existing video-inpainting-evaluation metrics by considering the relevance of the target object and the interconnectivity between objects. We conducted experiments to validate the proposed metric using real-world datasets from the Japanese cities of Sapporo and Yokohama. The experimental results demonstrate the feasibility of selecting video frames conducive to building feature extraction.
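A minimal sketch of how such a frame-scoring rule could look, assuming per-frame segmentation masks for the target building and for cars are available; the weights, function names, and the scalar inpainting-quality term are our own placeholders rather than the paper's exact metric:

```python
# Hypothetical sketch: score video frames by target-building relevance.
import numpy as np

def frame_relevance(building_mask: np.ndarray,
                    car_mask: np.ndarray,
                    inpaint_quality: float,
                    w_build: float = 1.0,
                    w_car: float = 1.0,
                    w_overlap: float = 1.0,
                    w_quality: float = 1.0) -> float:
    """Larger is better: favour frames with much building, few cars,
    little building/car overlap, and good inpainting quality."""
    total = building_mask.size
    building = building_mask.sum() / total              # fraction covered by the target building
    cars = car_mask.sum() / total                        # fraction covered by dynamic objects (cars)
    overlap = np.logical_and(building_mask, car_mask).sum() / total
    return float(w_build * building - w_car * cars
                 - w_overlap * overlap + w_quality * inpaint_quality)

# Pick the best frame of a clip (masks are boolean H x W arrays per frame).
frames = [(np.random.rand(4, 4) > 0.5, np.random.rand(4, 4) > 0.7, 0.8) for _ in range(5)]
best = max(range(len(frames)), key=lambda i: frame_relevance(*frames[i]))
```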

2.
Sensors (Basel) ; 24(10)2024 May 10.
Article in English | MEDLINE | ID: mdl-38793888

ABSTRACT

In this study, we propose a method for classifying expert-novice levels using a graph convolutional network (GCN) with a confidence-aware node-level attention mechanism. In classification using an attention mechanism, the highlighted features may not be significant for accurate classification, thereby degrading classification performance. To address this issue, the proposed method introduces a confidence-aware node-level attention mechanism into a spatiotemporal attention GCN (STA-GCN) for the classification of expert-novice levels. Consequently, our method can adjust the attention value of each node on the basis of the confidence measure of the classification, which mitigates the problem of attention-based classification approaches and enables accurate classification. Furthermore, because expert-novice levels are ordinal, using a classification model that accounts for this ordinality improves classification performance. The proposed method therefore trains a model that minimizes a loss function considering the ordinality of the classes to be classified. Together, these approaches improve expert-novice level classification performance.
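As an illustration of the ordinal side of the approach, the sketch below implements a generic cumulative-threshold ordinal loss; this is a standard formulation chosen by us for illustration and may differ from the paper's loss:

```python
# Minimal sketch of an ordinal-aware loss for expert-novice levels (our assumption).
import numpy as np

def ordinal_targets(level: int, num_levels: int) -> np.ndarray:
    """Level k -> k ones followed by zeros, e.g. level 2 of 4 -> [1, 1, 0]."""
    return (np.arange(num_levels - 1) < level).astype(float)

def ordinal_loss(logits: np.ndarray, level: int) -> float:
    """Binary cross-entropy over the K-1 cumulative thresholds."""
    t = ordinal_targets(level, logits.size + 1)
    p = 1.0 / (1.0 + np.exp(-logits))          # sigmoid per threshold
    eps = 1e-9
    return float(-(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps)).mean())

print(ordinal_loss(np.array([2.0, 0.5, -1.0]), level=2))  # small: thresholds agree with level 2
```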

3.
Sensors (Basel) ; 24(10)2024 May 13.
Article in English | MEDLINE | ID: mdl-38793943

ABSTRACT

Advancements in deep learning have significantly enhanced the capability of image generation models to produce images aligned with human intentions. However, training and adapting these models to new data and tasks remain challenging because of their complexity and the risk of catastrophic forgetting. This study proposes a method that addresses these challenges by applying class-replacement techniques within a continual learning framework. The method utilizes selective amnesia (SA) to efficiently replace existing classes with new ones while retaining crucial information. This approach improves the model's adaptability to evolving data environments while preventing the loss of past information. We conducted a detailed evaluation of class-replacement techniques, examining their impact on the class-incremental-learning performance of models and exploring their applicability in various scenarios. The experimental results demonstrate that the proposed method can enhance the learning efficiency and long-term performance of image generation models. This study broadens the application scope of image generation technology and supports the continual improvement and adaptability of the corresponding models.
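Continual-learning setups of this kind typically combine replay of retained classes with a penalty that discourages drifting away from previously important parameters; the sketch below shows a generic elastic-weight-consolidation-style penalty as one such ingredient and is our illustration, not the paper's selective amnesia objective:

```python
# Generic EWC-style drift penalty (illustration only, not the paper's loss).
import numpy as np

def ewc_penalty(theta: np.ndarray, theta_old: np.ndarray,
                fisher_diag: np.ndarray, lam: float = 10.0) -> float:
    """lam/2 * sum_i F_i * (theta_i - theta_old_i)^2, penalising changes to
    parameters that were important for previously learned classes."""
    return 0.5 * lam * float(np.sum(fisher_diag * (theta - theta_old) ** 2))

theta_old = np.zeros(4)
print(ewc_penalty(np.array([0.1, 0.0, 0.2, 0.0]), theta_old, fisher_diag=np.ones(4)))
```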

4.
Sensors (Basel) ; 24(11)2024 May 27.
Article in English | MEDLINE | ID: mdl-38894233

ABSTRACT

This paper proposes a multimodal Transformer model that uses time-series data to detect and predict winter road surface conditions. Previous approaches to detecting or predicting road surface conditions use multiple modalities cooperatively as inputs, e.g., images captured by fixed-point cameras (road surface images) and auxiliary data related to road surface conditions, but integrate the modalities only in a simple manner. Although such approaches improve performance over methods using only images or only auxiliary data, the way heterogeneous modalities are integrated deserves further consideration. The proposed method realizes more effective modality integration using a cross-attention mechanism and time-series processing. Concretely, when integrating multiple modalities, a feature integration technique based on a cross-attention mechanism allows the modalities to complement each other, enhancing the representational ability of the integrated features. In addition, by introducing time-series processing of the input data across several timesteps, the method can consider temporal changes in road surface conditions. Experiments are conducted for both detection and prediction tasks, using data corresponding to the current winter condition and data corresponding to a few hours ahead, respectively. The experimental results verify the effectiveness of the proposed method for both tasks. In addition to constructing the classification model for winter road surface conditions, we make a first attempt to visualize the classification results, especially the prediction results, via an image style transfer model in supplemental image-generation experiments at the end of the paper.
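A compact sketch of the kind of cross-attention fusion described above, written by us under assumed feature dimensions and layer choices (it is not the authors' released model):

```python
# Sketch: cross-attention fusion of per-timestep road-image features and
# auxiliary features, followed by temporal aggregation and classification.
import torch
import torch.nn as nn

class CrossModalRoadClassifier(nn.Module):
    def __init__(self, dim: int = 128, num_classes: int = 4):
        super().__init__()
        self.img_to_aux = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.aux_to_img = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.temporal = nn.GRU(2 * dim, dim, batch_first=True)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, img_feat, aux_feat):
        # img_feat, aux_feat: (batch, timesteps, dim), already encoded per modality.
        a, _ = self.img_to_aux(img_feat, aux_feat, aux_feat)   # image attends to aux
        b, _ = self.aux_to_img(aux_feat, img_feat, img_feat)   # aux attends to image
        fused = torch.cat([a, b], dim=-1)                      # mutually complemented features
        _, h = self.temporal(fused)                            # aggregate over timesteps
        return self.head(h[-1])                                # logits per condition class

logits = CrossModalRoadClassifier()(torch.randn(2, 6, 128), torch.randn(2, 6, 128))
```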

5.
Sensors (Basel) ; 24(3)2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38339636

ABSTRACT

Text-guided image editing has attracted attention in the fields of computer vision and natural language processing in recent years. The approach takes an image and a text prompt as input and aims to edit the image in accordance with the text prompt while preserving text-unrelated regions. The results of text-guided image editing differ depending on how the text prompt is phrased, even when its meaning is the same, and it is up to the user to decide which result best matches the intended use of the edited image. This paper assumes a situation in which edited images are posted to social media and proposes a novel text-guided image editing method that helps the edited images gain attention from a greater audience. In the proposed method, we apply a pre-trained text-guided image editing method to obtain multiple edited images from multiple text prompts generated by a large language model. The proposed method then leverages a novel model that predicts post scores representing engagement rates and selects the edited image expected to gain the most attention from the social media audience. Experiments with human subjects on a dataset of real Instagram posts demonstrate that the edited images of the proposed method accurately reflect the content of the text prompts and give a more positive impression to the social media audience than those of previous text-guided image editing methods.
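The selection step can be pictured as follows; every function in this sketch is a placeholder defined here for illustration, not an API of the paper or of any specific library:

```python
# Illustrative flow only: edit with several paraphrased prompts, score each
# candidate with a post-score predictor, and keep the highest-scoring edit.
from typing import Callable, List

def select_best_edit(image,
                     base_prompt: str,
                     paraphrase: Callable[[str], List[str]],
                     edit: Callable[[object, str], object],
                     post_score: Callable[[object], float]):
    """Edit the image once per paraphrased prompt and keep the edit whose
    predicted social-media engagement (post score) is highest."""
    candidates = [edit(image, p) for p in paraphrase(base_prompt)]
    return max(candidates, key=post_score)

# Toy stand-ins so the sketch runs end to end.
edited = select_best_edit(
    image="photo.jpg",
    base_prompt="make the sky look like sunset",
    paraphrase=lambda p: [p, p + ", warm tones", p + ", dramatic clouds"],
    edit=lambda img, prompt: (img, prompt),
    post_score=lambda cand: len(cand[1]),   # dummy score; a learned predictor in the paper
)
```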


Subject(s)
Social Media, Humans, Language, Natural Language Processing
6.
J Aging Phys Act ; 32(1): 1-7, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-37295783

ABSTRACT

We investigated the association between the cross-sectional area (CSA) of the gluteus medius muscle (GMM) and activities of daily living in patients with hip fractures. This retrospective cohort study comprised 111 patients aged ≥65 years who underwent hip fracture rehabilitation. The CSA of the GMM was measured using computed tomography scans in the early stages of hospitalization. The group with decreased CSA of the GMM had a gluteus medius muscle index (GMI) at or below the median: ≤17 cm²/m² for male patients and ≤16 cm²/m² for female patients. Patients in the group with decreased CSA of the GMM had lower functional independence measure gains than those in the control group. After adjusting for confounders, we found that decreased CSA of the GMM was significantly associated with lower functional independence measure gains (β: -0.432, p < .001). In patients with hip fractures, decreased CSA of the GMM was associated with decreased activities of daily living.


Subject(s)
Activities of Daily Living, Hip Fractures, Humans, Male, Female, Aged, Retrospective Studies, Skeletal Muscle/diagnostic imaging, Skeletal Muscle/physiology, Thigh
7.
J Urol ; 209(1): 187-197, 2023 01.
Article in English | MEDLINE | ID: mdl-36067387

ABSTRACT

PURPOSE: This study aimed to evaluate the usefulness of LDN-PSA (LacdiNAc-glycosylated prostate-specific antigen) in detecting clinically significant prostate cancer in patients suspected of having clinically significant prostate cancer on multiparametric magnetic resonance imaging. MATERIALS AND METHODS: Patients with prostate-specific antigen levels between 3.0 ng/mL and 20 ng/mL and suspicious lesions with a PI-RADS (Prostate Imaging-Reporting and Data System) category ≥3 were included prospectively. LDN-PSA was measured using an automated 2-step Wisteria floribunda agglutinin lectin-anti-prostate-specific antigen antibody sandwich immunoassay. RESULTS: Two hundred four patients were included. Clinically significant prostate cancer was detected in 105 patients. On multivariable logistic regression analysis, prostate-specific antigen density (OR 1.61, P = .010), LDN-PSAD (OR 1.04, P = .012), highest PI-RADS category (3 vs 4, 5; OR 14.5, P < .0001), and location of the lesion with the highest PI-RADS category (transition zone vs peripheral zone; OR 0.34, P = .009) were significant risk factors for detecting clinically significant prostate cancer. Among the patients with a highest PI-RADS category of 3 (n=113), clinically significant prostate cancer was detected in 28 patients. On multivariable logistic regression analysis to predict the detection of clinically significant prostate cancer in these patients, age (OR 1.10, P = .026) and LDN-PSAD (OR 1.07, P < .0001) were risk factors for detecting clinically significant prostate cancer. CONCLUSIONS: LDN-PSAD could be a biomarker for detecting clinically significant prostate cancer in patients with prostate-specific antigen levels ≤20 ng/mL and suspicious lesions with a PI-RADS category ≥3. The use of LDN-PSAD as an adjunct to prostate-specific antigen levels would avoid unnecessary biopsies in patients with a highest PI-RADS category of 3. Multi-institutional studies with larger populations are recommended.


Subject(s)
Prostate-Specific Antigen, Prostatic Neoplasms, Humans, Male, Magnetic Resonance Imaging, Prostatic Neoplasms/diagnostic imaging
8.
Cerebrovasc Dis ; 52(1): 75-80, 2023.
Article in English | MEDLINE | ID: mdl-35917807

ABSTRACT

BACKGROUND: Peak oxygen consumption (V̇O2peak) and blood hemoglobin concentration [Hb] are lower in stroke patients than in age-matched healthy subjects, and the ability of skeletal muscles to extract oxygen is diminished after stroke. We hypothesized that the oxygen extraction capacity of skeletal muscles in stroke patients depends on [Hb]. To test this hypothesis, we determined the relationship between V̇O2peak and total hemoglobin mass (tHb-mass) in stroke patients. METHODS: The subjects were 19 stroke patients (age: 74 ± 2 years, mean ± SD, 10 males) and 11 age-matched normal subjects (age: 76 ± 3 years, 6 males). Plasma volume (PV) and V̇O2peak were measured on the same day. PV was measured using the Evans Blue dye dilution method. Blood volume (BV) was calculated from PV and hematocrit, while tHb-mass was estimated from BV and [Hb]. Each subject underwent a cardiopulmonary exercise test on a bicycle ergometer, with V̇O2peak determined by respiratory gas analysis. RESULTS: There were no differences in age, height, or weight between the two groups. V̇O2peak was lower in stroke patients than in controls. BV and tHb-mass were not significantly different between the two groups, but [Hb] was significantly lower in stroke patients. In stroke patients, V̇O2peak correlated significantly with tHb-mass (r = 0.497, p < 0.05), but not with BV. CONCLUSION: Our results suggest that low [Hb] contributes to the reduced V̇O2peak in stroke patients. The significant correlation between tHb-mass and V̇O2peak suggests that treatment to improve [Hb] could potentially improve V̇O2peak in stroke patients.


Subject(s)
Oxygen Consumption, Stroke, Aged, Humans, Male, Exercise Test, Hemoglobins/metabolism, Oxygen, Oxygen Consumption/physiology, Stroke/diagnosis, Female
9.
BMC Urol ; 23(1): 85, 2023 May 08.
Article in English | MEDLINE | ID: mdl-37158841

ABSTRACT

BACKGROUND: Collecting system entry in robot-assisted partial nephrectomy may occur even in cases with a low N factor in the R.E.N.A.L. nephrometry score. Therefore, in this study, we focused on the tumor contact surface area with the adjacent renal parenchyma and attempted to construct a novel predictive model for collecting system entry. METHODS: Among 190 patients who underwent robot-assisted partial nephrectomy at our institution from 2015 to 2021, 94 patients with a low N factor (1-2) were analyzed. The contact surface was measured with three-dimensional imaging software and defined as the C factor, classified as C1: <10 cm²; C2: ≥10 and <15 cm²; and C3: ≥15 cm². Additionally, a modified R factor (mR) was classified as mR1: <20 mm; mR2: ≥20 and <40 mm; and mR3: ≥40 mm. We examined the factors influencing collecting system entry, including the C factor, and created a novel predictive model for collecting system entry. RESULTS: Collecting system entry was observed in 32 patients with a low N factor (34%). The C factor was the only independent predictive factor for collecting system entry in multivariate regression analysis (odds ratio: 4.195, 95% CI: 2.160-8.146, p < 0.0001). Models including the C factor showed better discriminative power than models without it. CONCLUSIONS: The new predictive model including the C factor in N1-2 cases may be beneficial when considering the indication for preoperative ureteral catheter placement in patients undergoing robot-assisted partial nephrectomy.
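The categorical factors described above can be expressed directly from the thresholds given in the abstract (the logistic regression model itself is not reproduced):

```python
# Thresholds taken from the abstract; only the categorisation is sketched here.
def c_factor(contact_area_cm2: float) -> int:
    """C1: <10 cm2, C2: 10 to <15 cm2, C3: >=15 cm2."""
    if contact_area_cm2 < 10:
        return 1
    return 2 if contact_area_cm2 < 15 else 3

def mr_factor(tumor_diameter_mm: float) -> int:
    """mR1: <20 mm, mR2: 20 to <40 mm, mR3: >=40 mm."""
    if tumor_diameter_mm < 20:
        return 1
    return 2 if tumor_diameter_mm < 40 else 3

assert c_factor(12.3) == 2 and mr_factor(45.0) == 3
```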


Subject(s)
Renal Cell Carcinoma, Kidney Neoplasms, Nephrectomy, Robotic Surgical Procedures, Robotics, Humans, Retrospective Studies, Renal Cell Carcinoma/surgery, Kidney Neoplasms/surgery
10.
Spinal Cord ; 61(2): 139-144, 2023 02.
Article in English | MEDLINE | ID: mdl-36241700

ABSTRACT

STUDY DESIGN: Experimental study. OBJECTIVES: To compare lipid profiles during moderate-intensity exercise between persons with cervical spinal cord injuries (SCIC) and able-bodied controls (AB). SETTING: Wakayama Medical University, Japan. METHODS: Six participants with SCIC and six AB performed 30 min of arm-crank exercise at 50% VO2peak. Blood samples were collected before (PRE), immediately after (POST), and 60 min after exercise (REC). Concentrations of serum free fatty acids ([FFA]s), total ketone bodies ([tKB]s), acetoacetic acid ([AcAc]s), and insulin ([Ins]s), and plasma catecholamines and glucose ([Glc]p) were assessed. RESULTS: Catecholamine concentrations in SCIC were lower than in AB throughout the experiment (P < 0.001) and remained unchanged, whereas they increased at POST in AB (P < 0.01). [FFA]s remained unchanged in both groups, with no differences between groups. [tKB]s in SCIC tended to increase at REC from PRE (P = 0.043), while remaining unchanged in AB (P > 0.42). [AcAc]s in SCIC increased at REC from PRE and POST (P < 0.01), while remaining unchanged in AB (Group × Time interaction, P = 0.014). [Glc]p and [Ins]s were comparable between the groups throughout the study. CONCLUSION: Serum ketone bodies in SCIC increased after exercise while remaining unchanged in AB, suggesting that suppressed uptake of ketone bodies from the blood into the muscles partially contributes to the increased serum ketones in SCIC.


Subject(s)
Cervical Cord, Spinal Cord Injuries, Humans, Spinal Cord Injuries/diagnosis, Prospective Studies, Ketones, Ketone Bodies, Catecholamines
11.
Sensors (Basel) ; 23(3)2023 Jan 17.
Article in English | MEDLINE | ID: mdl-36772095

ABSTRACT

Auxiliary clinical diagnosis has been studied as a way to address unevenly and insufficiently distributed clinical resources. However, auxiliary diagnosis is still dominated by human physicians, and how to involve intelligent systems more deeply in the diagnosis process is gradually becoming a concern. An interactive automated clinical diagnosis combining a question-answering system and a question generation system can capture a patient's condition from multiple perspectives with less physician involvement by asking different questions to drive and guide the diagnosis. This diagnostic process requires diverse information to evaluate a patient from different perspectives and obtain an accurate diagnosis, but recently proposed medical question generation systems have not considered diversity. Thus, we propose a diversity-learning-based visual question generation model that uses a multi-latent space to generate informative question sets from medical images. The proposed method generates various questions by embedding visual and language information in different latent spaces, whose diversity is encouraged by our newly proposed loss. We also add control over the categories of generated questions, making the generated questions directional. Furthermore, we use a new metric named similarity to accurately evaluate the proposed model's performance. The experimental results on the Slake and VQA-RAD datasets demonstrate that the proposed method can generate questions with diverse information. Our model works with an answering model for interactive automated clinical diagnosis and can generate datasets that replace the labor-intensive annotation process.
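One way to encourage diversity across latent spaces is to penalize small pairwise distances between the embeddings behind the generated questions; the sketch below is our generic illustration, not the paper's exact loss:

```python
# Generic diversity penalty over question embeddings (illustration only).
import numpy as np

def diversity_penalty(latents: np.ndarray) -> float:
    """latents: (num_questions, dim). Smaller when embeddings are spread out."""
    n = latents.shape[0]
    dists = []
    for i in range(n):
        for j in range(i + 1, n):
            dists.append(np.linalg.norm(latents[i] - latents[j]))
    return float(1.0 / (1e-6 + np.mean(dists)))   # penalise small average pairwise distance

print(diversity_penalty(np.random.randn(4, 16)))
```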


Subject(s)
Natural Language Processing, Semantics, Humans, Language
12.
Sensors (Basel) ; 23(10)2023 May 16.
Article in English | MEDLINE | ID: mdl-37430712

ABSTRACT

In this paper, we propose a hierarchical multi-modal multi-label attribute classification model for anime illustrations using a graph convolutional network (GCN). Our focus is on the challenging task of multi-label attribute classification, which requires capturing subtle features intentionally highlighted by the creators of anime illustrations. To address the hierarchical nature of these attributes, we leverage hierarchical clustering and hierarchical label assignment to organize the attribute information into a hierarchical feature. The proposed GCN-based model effectively utilizes this hierarchical feature to achieve high accuracy in multi-label attribute classification. The contributions of the proposed method are as follows. Firstly, we introduce a GCN to the multi-label attribute classification task for anime illustrations, enabling it to capture more comprehensive relationships between attributes from their co-occurrence. Secondly, we capture subordinate relationships among the attributes by adopting hierarchical clustering and hierarchical label assignment. Lastly, we construct a hierarchical structure of the attributes that appear more frequently in anime illustrations based on rules derived from previous studies, which helps to reflect the relationships between different attributes. Experimental results on multiple datasets, compared with several existing methods including the state-of-the-art method, show that the proposed method is effective and extensible.
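For reference, the basic operation the model builds on is graph convolution over an attribute co-occurrence graph; a single propagation step with symmetric normalization can be sketched as follows (sizes and data are placeholders):

```python
# One GCN propagation step over a co-occurrence adjacency matrix: ReLU(A_norm X W).
import numpy as np

def gcn_layer(adj: np.ndarray, features: np.ndarray, weight: np.ndarray) -> np.ndarray:
    a_hat = adj + np.eye(adj.shape[0])                  # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt          # D^-1/2 (A+I) D^-1/2
    return np.maximum(norm_adj @ features @ weight, 0)  # ReLU activation

cooccurrence = np.array([[0, 3, 1], [3, 0, 0], [1, 0, 0]], dtype=float)  # 3 attributes
out = gcn_layer(cooccurrence, np.random.randn(3, 8), np.random.randn(8, 4))
```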

13.
Sensors (Basel) ; 23(15)2023 Aug 03.
Article in English | MEDLINE | ID: mdl-37571685

ABSTRACT

Zero-shot neural decoding aims to decode image categories that were not used in training from the functional magnetic resonance imaging (fMRI) activity evoked when a person views images. However, insufficient training data, owing to the difficulty of collecting fMRI data, causes poor generalization capability, so models suffer from the projection domain shift problem when novel target categories are decoded. In this paper, we propose a zero-shot neural decoding approach with semi-supervised multi-view embedding. We introduce a semi-supervised approach that utilizes additional images related to the target categories without fMRI activity patterns. Furthermore, we project fMRI activity patterns into a multi-view embedding space, i.e., the visual and semantic feature spaces of the viewed images, to effectively exploit their complementary information. We define several source and target groups whose image categories are very different and verify the zero-shot neural decoding performance. The experimental results demonstrate that the proposed approach rectifies the projection domain shift problem and outperforms existing methods.
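A hedged sketch of the overall idea, under the assumption that a linear (ridge) mapping projects fMRI patterns into an image-derived embedding space where unseen categories are decoded by nearest-neighbor similarity; this is our simplification, not the authors' pipeline:

```python
# Ridge-regression projection of fMRI patterns plus nearest-neighbour decoding.
import numpy as np

def ridge_fit(X: np.ndarray, Y: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """W minimising ||XW - Y||^2 + lam * ||W||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def decode(x_fmri: np.ndarray, W: np.ndarray, category_embeddings: np.ndarray) -> int:
    z = x_fmri @ W                                            # project into embedding space
    sims = category_embeddings @ z / (
        np.linalg.norm(category_embeddings, axis=1) * np.linalg.norm(z) + 1e-9)
    return int(np.argmax(sims))                               # most similar category

X = np.random.randn(50, 200)        # 50 trials x 200 voxels (toy data)
Y = np.random.randn(50, 32)         # paired image embeddings
W = ridge_fit(X, Y)
pred = decode(X[0], W, np.random.randn(10, 32))
```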

14.
Sensors (Basel) ; 23(9)2023 May 07.
Article in English | MEDLINE | ID: mdl-37177744

ABSTRACT

This study proposes a novel off-screen sound separation method based on audio-visual pre-training. In the field of audio-visual analysis, researchers have leveraged visual information for audio manipulation tasks, such as sound source separation. Although such audio manipulation tasks rely on correspondences between audio and video, these correspondences are not always established. Specifically, sounds coming from outside the screen have no audio-visual correspondences and thus interfere with conventional audio-visual learning. The proposed method separates such off-screen sounds based on their arrival directions using binaural audio, which conveys three-dimensional spatial information. Furthermore, we propose a new pre-training method that can take the off-screen space into account and use the obtained representation to improve off-screen sound separation. Consequently, the proposed method can separate off-screen sounds irrespective of the direction from which they arrive. We conducted our evaluation using generated video data to circumvent the difficulty of collecting ground truth for off-screen sounds. We confirmed the effectiveness of our methods through off-screen sound detection and separation tasks.
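The directional cue such a method can exploit from binaural audio is, for example, the interaural time difference between the two channels; the sketch below estimates it by brute-force correlation and is only an illustration of the cue, not the separation network:

```python
# Estimate the interaural time difference (ITD) between binaural channels.
import numpy as np

def interaural_delay(left: np.ndarray, right: np.ndarray, max_lag: int = 40) -> int:
    """Lag (in samples) at which the right channel best aligns with the left;
    its sign indicates the side from which the source arrives."""
    best_lag, best_corr = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        c = float(np.dot(left, np.roll(right, lag)))
        if c > best_corr:
            best_lag, best_corr = lag, c
    return best_lag

t = np.linspace(0, 1, 8000)
src = np.sin(2 * np.pi * 440 * t)
print(interaural_delay(src, np.roll(src, 5)))   # about -5: the right channel lags the left
```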

15.
Sensors (Basel) ; 23(9)2023 May 05.
Article in English | MEDLINE | ID: mdl-37177712

ABSTRACT

In soccer, quantitatively evaluating the performance of players and teams is essential to improve tactical coaching and players' decision-making abilities. To achieve this, some methods use predicted probabilities of shoot event occurrences to quantify player performances, but conventional shoot prediction models have not performed well and have failed to consider the reliability of the event probability. This paper proposes a novel method that effectively utilizes players' spatio-temporal relations and prediction uncertainty to predict shoot event occurrences with greater accuracy and robustness. Specifically, we represent players' relations as a complete bipartite graph, which effectively incorporates soccer domain knowledge, and capture latent features by applying a graph convolutional recurrent neural network (GCRNN) to the constructed graph. Our model utilizes a Bayesian neural network to predict the probability of shoot event occurrence, considering spatio-temporal relations between players and prediction uncertainty. In our experiments, we confirmed that the proposed method outperformed several other methods in terms of prediction performance, and we found that considering players' distances significantly affects the prediction accuracy.
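The graph construction can be illustrated as below, assuming inverse-distance edge weights between every attacker-defender pair (the weighting, the GCRNN, and the Bayesian head are not taken from the paper):

```python
# Complete bipartite graph linking every attacking player to every defending player.
import numpy as np

def bipartite_adjacency(attack_xy: np.ndarray, defend_xy: np.ndarray) -> np.ndarray:
    """Returns a (n_att + n_def) x (n_att + n_def) adjacency with edges only
    between the two groups, weighted by 1 / (1 + distance)."""
    n_a, n_d = len(attack_xy), len(defend_xy)
    adj = np.zeros((n_a + n_d, n_a + n_d))
    for i, p in enumerate(attack_xy):
        for j, q in enumerate(defend_xy):
            w = 1.0 / (1.0 + np.linalg.norm(p - q))
            adj[i, n_a + j] = adj[n_a + j, i] = w
    return adj

adj = bipartite_adjacency(np.random.rand(11, 2) * 100, np.random.rand(11, 2) * 100)
```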

16.
Sensors (Basel) ; 23(23)2023 Dec 04.
Article in English | MEDLINE | ID: mdl-38067982

ABSTRACT

Traffic sign recognition is a complex and challenging yet popular problem that can assist drivers on the road and reduce traffic accidents. Most existing methods for traffic sign recognition use convolutional neural networks (CNNs) and can achieve high recognition accuracy. However, these methods first require large, carefully crafted traffic sign datasets for training. Moreover, since traffic signs differ between countries and are highly varied, these methods need to be fine-tuned to recognize new traffic sign categories. To address these issues, we propose a traffic sign matching method for zero-shot recognition. Our proposed method performs traffic sign recognition without training data by directly matching the similarity of target and template traffic sign images. It uses the mid-level features of CNNs to obtain robust feature representations of traffic signs without additional training or fine-tuning; we found that mid-level features improve the accuracy of zero-shot traffic sign recognition. The proposed method achieves promising recognition results on the German Traffic Sign Recognition Benchmark open dataset and a real-world dataset collected in Sapporo City, Japan.
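The matching step reduces to comparing a target sign's mid-level CNN feature with template features and returning the most similar template's class; in this sketch the features are toy arrays standing in for the CNN outputs:

```python
# Zero-shot matching by cosine similarity between target and template features.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def match_sign(target_feat: np.ndarray, template_feats: np.ndarray, labels: list) -> str:
    sims = [cosine(target_feat, t) for t in template_feats]
    return labels[int(np.argmax(sims))]

templates = np.random.randn(3, 256)                      # mid-level features of 3 templates
print(match_sign(np.copy(templates[1]), templates, ["stop", "yield", "speed_30"]))  # "yield"
```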

17.
Sensors (Basel) ; 23(22)2023 Nov 20.
Article in English | MEDLINE | ID: mdl-38005673

ABSTRACT

At present, text-guided image manipulation is a notable subject of study in the vision and language field. Given an image and text as inputs, these methods aim to manipulate the image according to the text while preserving text-irrelevant regions. Although there has been extensive research to improve the versatility and performance of text-guided image manipulation, research on its performance evaluation is inadequate. This study proposes Manipulation Direction (MD), a logical and robust metric that evaluates the performance of text-guided image manipulation by focusing on the changes between the image and text modalities. Specifically, we define MD as the consistency of the changes between images and texts occurring before and after manipulation. By using MD to evaluate the performance of text-guided image manipulation, we can comprehensively evaluate how an image has changed before and after the manipulation and whether this change agrees with the text. Extensive experiments on Multi-Modal-CelebA-HQ and Caltech-UCSD Birds confirmed that our MD scores correlate with subjective scores for the manipulated images more strongly than existing metrics do.
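Our reading of the metric, sketched with toy embeddings: compare the direction in which the image embedding moved with the direction in which the text embedding moved, e.g., by cosine similarity of the two difference vectors (the official formulation may differ):

```python
# Consistency of image-side and text-side changes before/after manipulation.
import numpy as np

def manipulation_direction(img_before, img_after, txt_before, txt_after) -> float:
    d_img = img_after - img_before          # how the image embedding moved
    d_txt = txt_after - txt_before          # how the text embedding moved
    return float(d_img @ d_txt /
                 (np.linalg.norm(d_img) * np.linalg.norm(d_txt) + 1e-9))

rng = np.random.default_rng(0)
img_b, txt_b = rng.normal(size=16), rng.normal(size=16)
print(manipulation_direction(img_b, img_b + 1.0, txt_b, txt_b + 1.0))  # ~1.0: changes agree
```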

18.
Sensors (Basel) ; 23(3)2023 Feb 02.
Article in English | MEDLINE | ID: mdl-36772694

ABSTRACT

This study presents a method for distress image classification in road infrastructure that introduces self-supervised learning. Self-supervised learning is an unsupervised learning approach that does not require class labels; it can reduce annotation effort and allows machine learning to be applied to a large number of unlabeled images. We propose a novel distress image classification method using contrastive learning, a type of self-supervised learning. Contrastive learning provides an image-domain-specific representation by constraining the latent space such that similar images are embedded near one another. We augment each input distress image into multiple images via image transformations and construct a latent space in which the augmented images are embedded close to each other. This yields a domain-specific representation of road infrastructure damage from a large number of unlabeled distress images. Finally, the representation obtained by contrastive learning is used to improve distress image classification performance: the obtained contrastive learning model parameters are used for the distress image classification model. In this way, we obtain effective distress image representations from unlabeled distress images, which have been difficult to exploit in the past. In the experiments, we use distress images obtained from the real world to verify the effectiveness of the proposed method for various distress types and confirm the performance improvement.
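Self-supervised pre-training of this kind typically optimizes an NT-Xent/InfoNCE-style objective over two augmented views of each unlabeled image; the sketch below is a generic version of that loss, not necessarily the paper's exact formulation:

```python
# Generic NT-Xent contrastive loss over two augmented views of the same images.
import numpy as np

def nt_xent(z1: np.ndarray, z2: np.ndarray, temp: float = 0.5) -> float:
    """z1, z2: (n, d) embeddings of two augmentations of the same n images."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temp
    np.fill_diagonal(sim, -np.inf)                 # never match a view with itself
    n = z1.shape[0]
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])  # index of the positive pair
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return float(-log_prob.mean())

loss = nt_xent(np.random.randn(8, 32), np.random.randn(8, 32))
```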

19.
J Aging Phys Act ; 31(1): 75-80, 2023 02 01.
Article in English | MEDLINE | ID: mdl-35894998

ABSTRACT

This study aimed to evaluate the relationship between improvement in activities of daily living (ADL) and cognitive status during rehabilitation and assess factors associated with ADL improvement among older patients undergoing rehabilitation after hip fractures. This retrospective cohort study comprised 306 patients aged ≥80 years who underwent hip fracture rehabilitation. The functional independence measure gain during rehabilitation was significantly lower in the group with abnormal cognition than in the group with normal cognition. Mini-Mental State Examination, Charlson Comorbidity Index, daily duration of rehabilitation, and length of hospitalization for rehabilitation were independent factors associated with functional independence measure gain during rehabilitation in the multivariate regression analysis. Although older patients with cognitive impairment had lower ADL improvements during hip fracture rehabilitation, such patients may be able to improve their ADL by undergoing intensive and long rehabilitation programs. They should not refrain from such rehabilitation programs due to older age, fracture, and cognitive impairment.


Subject(s)
Cognition Disorders, Cognitive Dysfunction, Hip Fractures, Humans, Activities of Daily Living, Retrospective Studies, Hip Fractures/complications, Hip Fractures/psychology, Hip Fractures/rehabilitation
20.
J Aging Phys Act ; 31(6): 965-971, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37343947

ABSTRACT

This study evaluated the relationship between the muscle mass of the gluteus medius (GM) and the skeletal muscle mass index (SMI) in patients with hip fractures. In this study, 141 patients with hip fractures were divided into those with high or low SMI. The GM index (GMI) was calculated by dividing the GM measurement by the square of the height in meters. The correlation between GMI and SMI was then analyzed, and cutoff values for determining the loss of skeletal muscle mass were calculated using the receiver operating characteristic curve. GMI and SMI showed a positive correlation for both sexes (male: r = .890, female: r = .626, p < .001). The GMI cutoff values were 19.460 cm²/m² for males and 17.850 cm²/m² for females. Skeletal muscle mass evaluation of the GM could contribute to hip fracture recovery by improving mobility and facilitating the early diagnosis of skeletal muscle mass loss.
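The index and the reported cutoffs can be written out directly, assuming the numerator is the GM cross-sectional area in cm² (consistent with the cm²/m² units of the cutoffs):

```python
# GMI = GM cross-sectional area / height^2, with the study's sex-specific cutoffs.
def gluteus_medius_index(gm_csa_cm2: float, height_m: float) -> float:
    return gm_csa_cm2 / (height_m ** 2)

def low_muscle_mass(gmi: float, sex: str) -> bool:
    cutoff = 19.460 if sex == "male" else 17.850   # cm^2/m^2, from the abstract
    return gmi < cutoff

print(low_muscle_mass(gluteus_medius_index(45.0, 1.60), "female"))  # True: 17.58 < 17.85
```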


Subject(s)
Hip Fractures, Skeletal Muscle, Humans, Male, Female, Aged, Skeletal Muscle/physiology, Thigh