Results 1 - 4 of 4
1.
Sensors (Basel); 21(3), 2021 Jan 30.
Article in English | MEDLINE | ID: mdl-33573170

ABSTRACT

Velocity-based training is a contemporary method used by sports coaches to prescribe the optimal loading based on the movement velocity of the load being lifted. The most widely used and accurate instruments for monitoring velocity are linear position transducers. Alternatively, smartphone apps compute mean velocity after each execution by manual on-screen digitizing, which introduces human error. In this paper, a video-based instrument delivering unattended, real-time measures of barbell velocity with a smartphone high-speed camera has been developed. A custom image-processing algorithm detects reference points of a multipower machine to autocalibrate, and automatically tracks barbell markers to deliver kinematically derived parameters in real time. Validity and reliability were studied by comparing simultaneous measurements of 160 repetitions of back-squat lifts executed by 20 athletes with the proposed instrument and a validated linear position transducer used as the criterion. The video system produced practically identical range, velocity, force, and power outcomes to the criterion, with low and proportional systematic bias and random errors. Our results suggest that the developed video system is a valid, reliable, and trustworthy instrument for accurately measuring velocity and derived variables, with practical implications for coaches and practitioners.


Subjects
Resistance Training , Smartphone , Weight Lifting , Biomechanical Phenomena , Humans , Reproducibility of Results , Video Recording
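As a rough illustration of the kind of computation such a video system performs (a minimal sketch, not the authors' actual algorithm), mean concentric velocity can be derived from per-frame marker positions once a pixel-to-metre calibration is known. The function name, frame rate and calibration figures below are hypothetical:

```python
def mean_velocity(y_px, fps, px_per_m):
    """Mean vertical bar velocity (m/s) from per-frame marker y-positions.

    y_px: marker vertical position in pixels for each frame; image y grows
    downward, so an upward lift corresponds to decreasing y values.
    """
    if len(y_px) < 2:
        raise ValueError("need at least two frames")
    displacement_m = (y_px[0] - y_px[-1]) / px_per_m  # upward displacement
    duration_s = (len(y_px) - 1) / fps                # elapsed lift time
    return displacement_m / duration_s

# Hypothetical 240 fps clip: the marker rises 480 px over 120 frame
# intervals, with a calibration of 800 px per metre -> 0.6 m in 0.5 s.
ys = [1000 - 4 * i for i in range(121)]
v = mean_velocity(ys, fps=240, px_per_m=800)  # 1.2 m/s
```

A real pipeline would additionally smooth the tracked trajectory and detect the start and end of the concentric phase before averaging.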
2.
Biomed Eng Online; 18(1): 29, 2019 Mar 20.
Article in English | MEDLINE | ID: mdl-30894178

ABSTRACT

BACKGROUND: Most current algorithms for automatic glaucoma assessment using fundus images rely on handcrafted features based on segmentation, and are therefore affected by the performance of the chosen segmentation method and the features extracted. Among other characteristics, convolutional neural networks (CNNs) are known for their ability to learn highly discriminative features from raw pixel intensities. METHODS: In this paper, we employed five different ImageNet-trained models (VGG16, VGG19, InceptionV3, ResNet50 and Xception) for automatic glaucoma assessment using fundus images. Results from an extensive validation using cross-validation and cross-testing strategies were compared with previous works in the literature. RESULTS: Using five public databases (1707 images), an average AUC of 0.9605 with a 95% confidence interval of 95.92-97.07%, an average specificity of 0.8580 and an average sensitivity of 0.9346 were obtained with the Xception architecture, significantly improving on other state-of-the-art works. Moreover, a new clinical database, ACRIMA, containing 705 labelled images, has been made publicly available. It comprises 396 glaucomatous and 309 normal images, making it the largest public database for glaucoma diagnosis. The high specificity and sensitivity obtained with the proposed approach are supported by an extensive validation using not only the cross-validation strategy but also cross-testing on, to the best of the authors' knowledge, all publicly available glaucoma-labelled databases. CONCLUSIONS: These results suggest that ImageNet-trained models are a robust alternative for automatic glaucoma screening systems. All images, CNN weights and software used to fine-tune and test the five CNNs are publicly available and could serve as a testbed for further comparisons.


Subjects
Fundus Oculi , Glaucoma/diagnostic imaging , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Databases, Factual , Humans , Time Factors
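The validation metrics reported above (sensitivity, specificity, AUC) are standard quantities computable directly from ground-truth labels and classifier scores. The sketch below is a generic, dependency-free illustration with made-up numbers, not the authors' evaluation code:

```python
def sensitivity_specificity(labels, preds):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) for binary labels."""
    tp = sum(y == 1 and p == 1 for y, p in zip(labels, preds))
    fn = sum(y == 1 and p == 0 for y, p in zip(labels, preds))
    tn = sum(y == 0 and p == 0 for y, p in zip(labels, preds))
    fp = sum(y == 0 and p == 1 for y, p in zip(labels, preds))
    return tp / (tp + fn), tn / (tn + fp)

def auc(labels, scores):
    """AUC as the probability a positive outscores a negative (ties = 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 3 glaucomatous (label 1) and 2 normal (label 0) images.
labels = [1, 1, 1, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2]
preds = [int(s >= 0.5) for s in scores]   # hard decisions at threshold 0.5
sens, spec = sensitivity_specificity(labels, preds)
```

The pairwise-comparison form of AUC used here is the Mann-Whitney interpretation; for the database sizes in the paper, a library implementation would normally be used instead.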
3.
Article in English | MEDLINE | ID: mdl-36011491

ABSTRACT

The crouching or prone-on-the-ground observation heights suggested by the My Jump app are not practical in some settings, so users usually hold smartphones in a standing posture. This study aimed to analyze the reliability of My Jump 2 from the standardized and standing positions. Two identical smartphones recorded 195 countermovement jump executions by 39 active adult athletes at heights of 30 and 90 cm, which were randomly assessed by three experienced observers. Between-observer reliability was high for both observation heights separately (ICC ~0.99; SEM ~0.6 cm; CV ~1.3%), with low systematic (0.1 cm) and random (±1.7 cm) errors. Within-observer reliability for the three observers, comparing the standardized and standing positions, was also high (ICC ~0.99; SEM ~0.7 cm; CV ~1.4%), with errors of 0.3 ± 1.9 cm. Observer 2 was the least accurate of the three, although reliability remained similar to the levels of agreement reported in the literature. The mean observations at each height likewise showed high reliability (ICC = 0.993; SEM = 0.51 cm; CV = 1.05%; error 0.32 ± 1.4 cm). Therefore, reliability in the standing position did not differ from the standardized position, so the standing position can be regarded as an alternative way of using My Jump 2, with practical added benefits.


Subjects
Athletic Performance , Adult , Athletes , Exercise Test/methods , Humans , Reproducibility of Results , Smartphone
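The SEM and CV figures above can be estimated from paired trials. One common formulation (an illustrative sketch, not necessarily the study's exact statistics) takes the SEM as the standard deviation of the inter-trial differences divided by √2, and the CV as the SEM expressed as a percentage of the grand mean. The jump heights below are hypothetical:

```python
from statistics import mean, stdev

def sem_cv(trial1, trial2):
    """Typical error (SEM, same units as input) and CV% from paired trials."""
    diffs = [a - b for a, b in zip(trial1, trial2)]
    sem = stdev(diffs) / 2 ** 0.5          # SD of differences / sqrt(2)
    grand_mean = mean(trial1 + trial2)     # overall mean across both trials
    return sem, 100 * sem / grand_mean

# Hypothetical jump heights (cm) measured twice for four athletes.
t1 = [30.1, 35.4, 28.9, 40.2]
t2 = [30.5, 34.8, 29.3, 40.0]
sem, cv = sem_cv(t1, t2)  # SEM about 0.35 cm, CV about 1.03%
```

The ICC values reported in the abstract require a mixed-model or ANOVA-based computation and are usually obtained from a statistics package rather than hand-rolled.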
4.
Comput Methods Programs Biomed; 198: 105788, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33130492

ABSTRACT

BACKGROUND AND OBJECTIVE: Optical coherence tomography (OCT) is a useful technique for monitoring the state of the retinal layers in both humans and animal models. Automated OCT analysis in rats is of great relevance for studying possible toxic effects of drugs and other treatments before human trials. In this paper, two different approaches to detect the most significant retinal layers in a rat OCT image are presented. METHODS: One approach is based on a combination of local horizontal intensity profiles with a newly proposed variant of the watershed transformation; the other is built upon an encoder-decoder convolutional network architecture. RESULTS: After a wide validation, average absolute distance errors of 3.77 ± 2.59 µm and 1.90 ± 0.91 µm were achieved by the two approaches, respectively, on a batch of the rat OCT database. In a second test of the deep-learning-based method on an unseen batch of the database, an average absolute distance error of 2.67 ± 1.25 µm was obtained. The rat OCT database used in this paper is made publicly available to facilitate further comparisons. CONCLUSIONS: The results demonstrate the competitiveness of the first approach, since it outperforms the commercial Insight image segmentation software (Phoenix Research Labs), as well as its utility for generating labelled images for validation purposes, significantly speeding up the ground-truth generation process. The deep-learning-based method improves on the results of the more conventional method and of other state-of-the-art techniques. In addition, the results of the proposed network were verified to generalize to new rat OCT images.


Subjects
Rodentia , Tomography, Optical Coherence , Animals , Neural Networks, Computer , Rats , Retina/diagnostic imaging , Software
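The averaged absolute distance error used above as the validation metric can be sketched as a per-column comparison between predicted and reference boundary positions, scaled by the scanner's axial resolution. The function and the 3.2 µm/pixel figure below are illustrative assumptions, not values taken from the paper:

```python
def mean_abs_distance_um(pred_rows, true_rows, um_per_px):
    """Averaged absolute distance error (um) between two layer boundaries,
    each given as one row index per image column (A-scan)."""
    if len(pred_rows) != len(true_rows):
        raise ValueError("boundaries must cover the same columns")
    errors = [abs(p - t) * um_per_px for p, t in zip(pred_rows, true_rows)]
    return sum(errors) / len(errors)

# Toy 4-column B-scan with an assumed axial resolution of 3.2 um/pixel:
# per-column errors of 1, 0, 1, 1 pixels average to 0.75 px, i.e. ~2.4 um.
pred = [10, 12, 11, 13]
true = [11, 12, 12, 12]
error_um = mean_abs_distance_um(pred, true, 3.2)
```

Representing each boundary as one row per column keeps the comparison simple; a full evaluation would also handle columns where a boundary is not detected.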