Results 1 - 5 of 5
1.
PNAS Nexus; 1(4): pgac176, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36714864

ABSTRACT

The purpose of this study was to compare basketball performance markers 1 y prior to initial severe lower extremity injury, including ankle, knee, and hip injuries, to 1 and 2 y following injury during the regular National Basketball Association (NBA) season. Publicly available data were extracted through a reproducible, computer-programmed extraction process. Eligible participants were NBA players with at least three seasons played between 2008 and 2019, with a time-loss injury reported during the study period. Basketball performance was evaluated for season minutes, points, and rebounds. The prevalence of return to performance was calculated, and linear regressions were fit. A total of 285 athletes sustained a severe lower extremity injury. A total of 196 (69%) played for 1 y and 130 (45%) played for 2 y following the injury. A total of 58 (30%) players participated in a similar number of games and 57 (29%) scored similar points 1 y following injury. A total of 48 (37%) participated in a similar number of games and 55 (42%) scored a similar number of points 2 y following injury. Fewer than half of basketball players who suffered a severe lower extremity injury were participating at the NBA level 2 y following injury, with similar findings for groin/hip/thigh, knee, and ankle injuries. Fewer than half of players were performing at preinjury levels 2 y following injury. Suffering a severe lower extremity injury may be a prognostic factor that can help sports medicine professionals educate NBA players and set performance expectations.
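
As a minimal sketch of the year-over-year comparison described above, assuming per-season totals in a small table: the column names, the toy data, and the 90% "similar performance" threshold are illustrative, not taken from the study.

    # Illustrative sketch, not the authors' code: compare a season performance marker
    # 1 y before a severe injury with 1 and 2 y after, per player.
    import pandas as pd

    # Hypothetical per-season totals; the study extracted these from public NBA data.
    seasons = pd.DataFrame({
        "player_id":     [1, 1, 1, 1, 2, 2, 2],
        "season":        [2014, 2015, 2016, 2017, 2014, 2015, 2016],
        "points":        [1200, 900, 1150, 1210, 800, 400, 0],
        "injury_season": [2015, 2015, 2015, 2015, 2015, 2015, 2015],
    })

    def returned(df, marker="points", tolerance=0.9):
        """Proportion of players whose post-injury marker is >= tolerance * pre-injury value.
        The 90% cutoff stands in for the study's definition of 'similar' performance."""
        rows = []
        for pid, g in df.groupby("player_id"):
            g = g.set_index("season")
            injury = int(g["injury_season"].iloc[0])
            pre = g[marker].get(injury - 1)          # season before the injury
            for lag in (1, 2):
                post = g[marker].get(injury + lag)   # 1 and 2 seasons after
                if pre is not None and post is not None:
                    rows.append({"player_id": pid, "years_after": lag,
                                 "returned": post >= tolerance * pre})
        return pd.DataFrame(rows).groupby("years_after")["returned"].mean()

    print(returned(seasons))  # proportion returning to near pre-injury scoring at 1 and 2 y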

2.
Orthop J Sports Med; 9(6): 23259671211004094, 2021 Jun.
Article in English | MEDLINE | ID: mdl-34179200

ABSTRACT

BACKGROUND: There is a paucity of current data describing injuries in professional basketball players. Utilizing publicly available injury data allows greater transparency and gives stakeholders a shared resource for creating future basketball injury prevention programs. PURPOSE: To describe injury and illness incidence, severity, and temporal trends in National Basketball Association (NBA) players. Among those who developed a time-loss injury or illness, we estimated severity based on games missed because of injury or illness. STUDY DESIGN: Descriptive epidemiology study. METHODS: Publicly available NBA data were extracted through a reproducible computer-programmed process from the 2008 to 2019 seasons. Data were externally validated by 2 independent reviewers through other publicly available data sources. Injury and illness incidence was calculated per 1000 athlete game-exposures (AGEs). Injury severity was calculated as games missed because of injury or illness. Injury and illness data were stratified by body part, position, severity (slight, minor, moderate, or severe), month, and year. RESULTS: A total of 1369 players played a total of 302,018 player-games, with a total of 5375 injuries and illnesses. The overall injury and illness incidence was 17.80 per 1000 AGEs. The median injury severity was 3 games (interquartile range, 0-6 games) missed per injury. Overall, 33% of injuries were classified as slight; 26%, as minor; 26%, as moderate; and 15%, as severe. The ankle (2.57 injuries/1000 AGEs), knee (2.44 injuries/1000 AGEs), groin/hip/thigh (1.99 injuries/1000 AGEs), and illness (1.85 illnesses/1000 AGEs) had the greatest incidence of injury and illness. Neither injury and illness incidence nor severity differed among basketball playing positions. Injury incidence demonstrated increasing incremental trends with season progression. Injuries were similar throughout the 11-year reporting period, except for a substantial increase in the lockout-shortened 2012 season. CONCLUSION: The ankle and knee had the greatest injury incidence. Injury incidence was similar among basketball positions. Injury incidence increased throughout the season, demonstrating a potential relationship between player load and injury incidence.
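
The headline rate reduces to a simple calculation; the short check below reproduces the reported 17.80 per 1000 AGEs from the stated totals. The ankle count is back-calculated for illustration and was not reported directly.

    # Sketch of the incidence calculation described above (denominator: player-games).
    injuries_and_illnesses = 5375
    athlete_game_exposures = 302_018  # one exposure = one player appearing in one game

    incidence_per_1000 = injuries_and_illnesses / athlete_game_exposures * 1000
    print(f"{incidence_per_1000:.2f} per 1000 AGEs")  # ~17.80, matching the reported value

    # Body-part-specific rates take the same form, e.g. ankle injuries:
    ankle_injuries = 776  # hypothetical count implied by the reported 2.57/1000 AGEs
    print(f"{ankle_injuries / athlete_game_exposures * 1000:.2f} per 1000 AGEs")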

3.
JAMA Netw Open; 4(10): e2128199, 2021 Oct 1.
Article in English | MEDLINE | ID: mdl-34605914

ABSTRACT

Importance: There is limited research investigating injury and illness among professional basketball players during their rookie season. By improving the understanding of injury incidence and risk specific to rookie players, sports medicine clinicians may be able to further individualize injury mitigation programs that address the unique needs of rookie players. Objective: To compare the incidence and rate ratio (RR) of injury and illness among professional National Basketball Association (NBA) players in their rookie season with veteran players and to explore the association of sustaining an injury during the rookie season with career longevity. Design, Setting, and Participants: This retrospective cohort study used an online data repository and extracted publicly available data about NBA players from the 2007-2008 season through the 2018-2019 season. Available data for initial injury and all subsequent injuries were extracted during this time frame. Exposures: Injury and illness based on injury status during the rookie season of professional NBA players. Main Outcomes and Measures: Injury and illness incidence and RR. The association of injury during the rookie season with career longevity was assessed via Poisson regressions. Results: Across the 12 basketball seasons analyzed, 904 NBA players were included (mean [SD] age, 24.6 [3.9] years; body mass index, 24.8 [1.8]). The injury and illness incidence for rookie players was 14.28 per 1000 athlete game exposures (AGEs). Among all body regions, ankle injuries had the greatest injury incidence among players injured during their rookie season (3.17 [95% CI, 3.15-3.19] per 1000 AGEs). Rookie athletes demonstrated a higher RR compared with veterans across multiple regions of the body (ankle: 1.32; 95% CI, 1.12 to 1.52; foot/toe: 1.29; 95% CI, 0.97 to 1.61; shoulder/arm/elbow: 1.43; 95% CI, 1.10 to 1.77; head/neck: 1.21; 95% CI, 0.61 to 1.81; concussions: 2.39; 95% CI, 1.89 to 2.90; illness: 1.14; 95% CI, 0.87 to 1.40) and a higher rate of initial injuries compared with veteran players (1.41; 95% CI, 1.29 to 1.53). Players who sustained an injury during their rookie season demonstrated an unadjusted decrease in total seasons played (-0.4 [95% CI, -0.5 to -0.3] log years; P < .001), but this decrease was not observed in the adjusted analysis (0.1 [95% CI, -0.1 to 0.2] log years; P = .36). Conclusions and Relevance: In this study, rookie athletes demonstrated the highest injury incidence at the ankle and increased RR across multiple regions of the body. These findings may reflect differences in preseason conditioning or load variables affecting rookie athletes and warrant further investigation. Future research is needed to determine the association of cumulative injury burden vs a singular injury event with career longevity.
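
A minimal sketch of the two quantities reported above, assuming statsmodels is available. The exposure counts and the simulated career-length data are invented; the rookie/veteran counts are chosen only so the ratio lands near the reported ankle RR of 1.32.

    # Illustrative sketch, not the authors' analysis code.
    import numpy as np
    import statsmodels.api as sm

    # Rate ratio: rookie vs. veteran injury rate per athlete game exposure (made-up counts).
    rookie_injuries, rookie_ages = 120, 37_800
    veteran_injuries, veteran_ages = 900, 375_000
    rate_ratio = (rookie_injuries / rookie_ages) / (veteran_injuries / veteran_ages)
    print(f"RR = {rate_ratio:.2f}")  # ~1.32 by construction

    # Poisson regression of career length (seasons played) on rookie-season injury status,
    # analogous in spirit to the adjusted analysis described in the abstract.
    rng = np.random.default_rng(0)
    injured_rookie = rng.integers(0, 2, size=300)            # 0 = uninjured, 1 = injured
    seasons_played = rng.poisson(lam=np.exp(1.6 - 0.1 * injured_rookie))
    fit = sm.GLM(seasons_played, sm.add_constant(injured_rookie),
                 family=sm.families.Poisson()).fit()
    print(fit.params)  # the injury coefficient is on the log scale ("log years")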


Subjects
Athletes/statistics & numerical data, Athletic Injuries/diagnosis, Basketball/injuries, Time Factors, Adolescent, Adult, Athletic Injuries/epidemiology, Athletic Injuries/etiology, Basketball/statistics & numerical data, Female, Humans, Male, Return to Sport
4.
Sch Psychol Q; 29(2): 171-181, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24274156

ABSTRACT

Although generalizability theory has been used increasingly in recent years to investigate the dependability of behavioral estimates, many of these studies have relied on general education populations as opposed to the students who are most likely to be referred for assessment due to problematic classroom behavior (e.g., inattention, disruption). The current study investigated the degree to which the magnitude of both variance component estimates and dependability coefficients differs between students nominated by their teachers for Tier 2 interventions due to classroom behavior problems and a general classroom sample (i.e., including both nominated and non-nominated students). The academic engagement levels of 16 (8 nominated, 8 non-nominated) middle school students were measured by 4 trained observers using momentary time-sampling procedures. A series of G and D studies was then conducted to determine whether the 2 groups were similar in terms of (a) the distribution of rating variance and (b) the number of observations needed to achieve an adequate level of dependability. Results suggested that the behavior of students in the teacher-nominated group fluctuated more across time and that roughly twice as many observations would therefore be required to yield similar levels of dependability compared with the combined group. These findings highlight the importance, when conducting psychometric investigations of behavioral assessment tools, of constructing samples of students that are comparable to those with whom the measurement method is likely to be applied.
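
The dependability question in this abstract amounts to a D-study projection: given variance components from a G study, how quickly does the dependability (phi) coefficient rise as observations accumulate? The single-facet sketch below (persons crossed with occasions) uses invented variance components chosen to echo the "roughly twice as many observations" pattern; these are not the study's estimates.

    # Single-facet D-study sketch (persons x occasions); invented variance components.
    def dependability(var_person, var_occasion, var_residual, n_obs):
        """Phi coefficient for absolute decisions in a p x o design."""
        error = (var_occasion + var_residual) / n_obs
        return var_person / (var_person + error)

    # A "stable" combined-group profile vs. a more variable, teacher-nominated profile.
    stable   = dict(var_person=0.6, var_occasion=0.1, var_residual=0.3)
    variable = dict(var_person=0.6, var_occasion=0.3, var_residual=0.5)

    for n in (2, 4, 6, 8, 10):
        print(n,
              round(dependability(n_obs=n, **stable), 2),
              round(dependability(n_obs=n, **variable), 2))
    # The more variable group needs roughly twice as many observations to reach the
    # same dependability, mirroring the pattern described above.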


Subjects
Achievement, Child Behavior/psychology, Schools, Students/psychology, Child, Female, Humans, Male, Psychometrics, Reproducibility of Results
5.
Sch Psychol Q; 27(4): 187-197, 2012 Dec.
Article in English | MEDLINE | ID: mdl-23294233

ABSTRACT

Although direct observation is one of the most frequently used assessment methods by school psychologists, studies have shown that the number of observations needed to obtain a dependable estimate of student behavior may be impractical. Because direct observation may be used to inform important decisions about students, it is crucial that the data be reliable. Preliminary research has suggested that dependability may be improved by extending the length of individual observations. The purpose of the current study was, therefore, to examine how changes in observation duration affect the dependability of student engagement data. Twenty seventh-grade students were each observed for 30 min on each of 2 days during math instruction. Generalizability theory was then used to calculate reliability-like coefficients for the purposes of intraindividual decision making. Across days, acceptable levels of dependability for progress monitoring (i.e., .70) were achieved through two 30-min observations, three 15-min observations, or four to five 10-min observations. Acceptable levels of dependability for higher-stakes decisions (i.e., .80) required over an hour of cumulative observation time. Within a given day, a 15-min observation was found to be adequate for making low-stakes decisions, whereas an hour-long observation was necessary for high-stakes decision making. Limitations of the current study and implications for research and practice are discussed.
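
A companion sketch projects how much cumulative observation time reaches the .70 and .80 benchmarks for observations of different lengths. The per-observation error variances are invented, chosen only so the projections roughly echo the durations reported above.

    # D-study-style projection; the error variances per observation length are assumed.
    def phi(var_person, var_error, n_obs):
        return var_person / (var_person + var_error / n_obs)

    var_person = 0.5
    error_by_length = {10: 1.0, 15: 0.6, 30: 0.35}  # assumed error variance per observation length (min)

    for target in (0.70, 0.80):
        for minutes, var_error in error_by_length.items():
            n = 1
            while phi(var_person, var_error, n) < target:
                n += 1
            print(f"phi >= {target}: {n} x {minutes}-min observations (~{n * minutes} min total)")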


Subjects
Adolescent Behavior/psychology, Psychology, Educational/methods, Research Design, Students/psychology, Students/statistics & numerical data, Adolescent, Female, Humans, Male, New England, Observer Variation, Psychometrics, Reproducibility of Results, Time Factors, Urban Population