Results 1 - 20 of 1,462
1.
Ann Surg Oncol ; 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39251516

ABSTRACT

BACKGROUND: Given the increased use of neoadjuvant therapy in early-stage, hormone receptor (HR)-positive/HER2-negative breast cancer, we sought to quantify the likelihood of breast-conserving surgery (BCS) after neoadjuvant chemotherapy (NACT) or endocrine therapy (NET) as a function of ER%/PR%/Ki-67%, 21-gene recurrence scores (RS), or 70-gene risk groups. METHODS: We analyzed the 2010-2020 National Cancer Database. Surgery was categorized as mastectomy or BCS. Logistic regression was performed. Adjusted odds ratios (AOR) were per 10-unit increase in ER%/PR%/Ki-67%. RESULTS: Overall, 42.3% of patients underwent BCS after NACT, whereas 64.0% did after NET. Increasing ER% (AOR = 0.96, 95% confidence interval [CI] 0.94-0.97) or PR% (AOR = 0.98, 95% CI 0.96-0.99) was associated with lower odds of BCS after NACT. Increasing Ki-67% was associated with greater odds of BCS (AOR = 1.07, 95% CI 1.04-1.10). BCS rates increased by ~20 percentage points when Ki-67% was ≥15 or RS was >20. Patients with a low (43.0%, AOR = 0.50, 95% CI 0.29-0.88) or intermediate (46.4%, AOR = 0.58, 95% CI 0.41-0.81) RS were less likely than patients with a high RS (65.0%) to undergo BCS after NACT. Increasing ER% was associated with higher odds of BCS after NET (AOR = 1.09, 95% CI 1.01-1.17), and BCS rates increased by ~20 percentage points between ER <50% and ER >80%. In both cohorts, the odds of BCS were similar between 70-gene low-risk and high-risk groups. Asian and uninsured patients had lower odds of BCS. CONCLUSIONS: Neoadjuvant chemotherapy is unlikely to downstage tumors with a low-intermediate RS, higher ER%/PR%, or lower Ki-67%. BCS after NET was most dependent on ER%. These findings could facilitate treatment decision-making based on tumor biology, highlight racial/socioeconomic disparities, and improve patient counseling on the likelihood of successful BCS.
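The adjusted odds ratios above are expressed per 10-unit increase in ER%/PR%/Ki-67%. As an illustrative sketch (the coefficient below is hypothetical, not taken from the study), a logistic regression's per-unit log-odds coefficient rescales to a per-10-unit odds ratio by multiplying the coefficient before exponentiating:

```python
import math

def odds_ratio_per_k_units(beta_per_unit: float, k: float = 10.0) -> float:
    """Odds ratio for a k-unit increase in a continuous predictor,
    given the per-unit log-odds coefficient from a logistic model."""
    return math.exp(k * beta_per_unit)

# Hypothetical coefficient: log-odds change of -0.004 per 1% increase in ER
print(round(odds_ratio_per_k_units(-0.004), 2))  # -> 0.96
```

A per-unit coefficient of -0.004, for instance, corresponds to an AOR of about 0.96 per 10-unit increase, the same magnitude reported for ER% after NACT.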

2.
EBioMedicine ; 107: 105276, 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39197222

ABSTRACT

BACKGROUND: Deployment of, and access to, state-of-the-art precision medicine technologies remain a fundamental challenge in providing equitable global cancer care in low-resource settings. The expansion of digital pathology in recent years, and its potential interface with diagnostic artificial intelligence algorithms, provides an opportunity to democratize access to personalized medicine. Current digital pathology workstations, however, cost thousands to hundreds of thousands of dollars. As cancer incidence rises in many low- and middle-income countries, the validation and implementation of low-cost automated diagnostic tools will be crucial to helping healthcare providers manage the growing burden of cancer. METHODS: Here we describe a low-cost ($230) workstation for digital slide capture and computational analysis composed of open-source components. We analyze the predictive performance of deep learning models when they are used to evaluate pathology images captured using this open-source workstation versus images captured using common, significantly more expensive hardware. Validation studies assessed model performance on three distinct datasets and predictive tasks: head and neck squamous cell carcinoma (HNSCC; HPV positive versus HPV negative), lung cancer (adenocarcinoma versus squamous cell carcinoma), and breast cancer (invasive ductal carcinoma versus invasive lobular carcinoma). FINDINGS: Compared with traditional pathology image capture methods, low-cost digital slide capture and analysis with the open-source workstation, including the low-cost microscope device, yielded model performance of comparable accuracy for breast, lung, and HNSCC classification. At the patient level of analysis, AUROC was 0.84 for HNSCC HPV status prediction, 1.0 for lung cancer subtype prediction, and 0.80 for breast cancer classification.
INTERPRETATION: Our ability to maintain model performance despite decreased image quality and low-power computational hardware demonstrates that it is feasible to massively reduce the costs associated with deploying deep learning models for digital pathology applications. Improving access to cutting-edge diagnostic tools may provide an avenue for reducing disparities in cancer care between high- and low-income regions. FUNDING: Funding for this project, including personnel support, was provided via grants from NIH/NCI R25-CA240134, NIH/NCI U01-CA243075, NIH/NIDCR R56-DE030958, NIH/NCI R01-CA276652, NIH/NCI K08-CA283261, NIH/NCI-SOAR25CA240134, the SU2C (Stand Up To Cancer) Fanconi Anemia Research Fund - Farrah Fawcett Foundation Head and Neck Cancer Research Team Grant, and the European Union Horizon Program (I3LUNG).
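The patient-level AUROC values reported above (0.84, 1.0, 0.80) can be computed from any set of labels and model scores without an ML framework, using the rank-based (Mann-Whitney) formulation; the labels and scores below are made up for illustration:

```python
def auroc(y_true, scores):
    """Probability that a randomly chosen positive outscores a randomly
    chosen negative; ties count as half a win (Mann-Whitney AUROC)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 * (p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two positives, two negatives
print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # -> 0.75
```

An AUROC of 1.0, as for the lung subtype model, means every positive patient scored above every negative patient.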

3.
Orthop Clin North Am ; 55(4): xiii-xiv, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39216956
4.
Analyst ; 149(18): 4536-4552, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39171617

ABSTRACT

Neurobiological research relies heavily on imaging techniques, such as fluorescence microscopy, to understand neurological function and disease processes. However, the number and variety of fluorescent probes available for ex vivo tissue section imaging limit the advance of research in the field. In this review, we outline the current range of fluorescent probes that are available to researchers for ex vivo brain section imaging, including their physical and chemical characteristics, their staining targets, and examples of discoveries for which they have been used. The review is organised into sections based on the biological target of the probe, including subcellular organelles, chemical species (e.g., labile metal ions), and pathological phenomena (e.g., degenerating cells, aggregated proteins). We hope to inspire further development in this field, given the considerable benefits to be gained from the greater availability of suitably sensitive probes with specificity for important brain tissue targets.


Subjects
Brain, Fluorescent Dyes, Fluorescent Dyes/chemistry, Brain/diagnostic imaging, Humans, Animals, Fluorescence Microscopy/methods, Neurosciences/methods
5.
Arch Bone Jt Surg ; 12(8): 558-566, 2024.
Article in English | MEDLINE | ID: mdl-39211566

ABSTRACT

Objectives: Reverse total shoulder arthroplasty (rTSA) has shown success in the treatment of end-stage glenohumeral pathology. However, one major shortcoming has been the lack of internal rotation (IR), which can have significant functional consequences. Much research has been conducted on maximizing IR after rTSA, but it is unclear from the literature whether vertebral-level or goniometer-based measurement of IR represents the "gold standard." Methods: Patients were prospectively enrolled into one of three groups: postoperative rTSA, subacromial (SA) pain, and normal shoulders. IR was measured either by vertebral body level, in which radiographic markers indicated the highest level the patient was able to reach on the body midline, or with a goniometer while the shoulder was held in 90 degrees of abduction with the patient standing upright. Results: Comparisons between the radiographic vertebral-level and goniometer IR measurements showed significant correlations within the normal (r = -0.43, P = 0.02) and SA pain (r = -0.44, P = 0.02) groups. The rTSA group did not quite reach statistical significance (P = 0.11) but showed a moderate correlation coefficient (r = -0.33). Visual IR measurements were also reasonably accurate: all rTSA group vertebral-level measurements were within two vertebral levels, while only 84.6% of goniometer IR measurements were within 15 degrees. Visual vertebral-level measurements were also more accurate for the SA pain group (86.2% vs 66.7%). Conclusion: The two primary methods of measuring shoulder IR were shown to correlate, which allows direct comparison of studies that report only one measurement method. While the correlation is not yet strong enough to allow conversion between the two measurement types, creating a matched cohort that accounts for other factors may strengthen the correlation to that point.
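The correlations reported above (e.g., r = -0.43 in the normal group) are ordinary Pearson coefficients between vertebral level reached and goniometer angle. A dependency-free sketch on hypothetical paired measurements (the negative sign arises because reaching a higher vertebral level is conventionally coded with a smaller number):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient for paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical pairs: smaller vertebral-level code (higher reach) with larger IR angle
levels = [12, 10, 8, 6, 4]   # e.g., T12 down to T4, coded numerically
angles = [30, 40, 55, 60, 75]  # goniometer IR in degrees
print(round(pearson_r(levels, angles), 2))
```

On these made-up data the correlation is strongly negative, mirroring the direction (though not the strength) of the study's findings.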

6.
Environ Pollut ; 360: 124700, 2024 Aug 11.
Article in English | MEDLINE | ID: mdl-39137875

ABSTRACT

Improper waste disposal or inadequate wastewater treatment can result in pharmaceuticals reaching water bodies, posing environmental hazards. In this study, crude extracts containing the laccase enzyme from Pleurotus florida, Pleurotus eryngii, and Pleurotus sajor caju were used to degrade the fluoroquinolone antibiotics (FQs) levofloxacin (LEV), norfloxacin (NOR), ciprofloxacin (CIP), ofloxacin (OFL), and enrofloxacin (ENR) in aqueous solutions. The results for the fungus-derived laccase extracts were compared with those obtained using commercially sourced laccase. Proteomics analysis of the crude extracts confirmed the presence of laccase enzyme across all three tested species, with proteins matching those found in Trametes versicolor and Pleurotus ostreatus. In vivo studies were conducted using pure-line fungal whole cells of each species. The highest degradation efficiency observed was 77.7% for LEV in the presence of P. sajor caju after 25 days of treatment. Degradation efficiencies ranged from approximately 60-72% for P. florida, 45-76% for P. eryngii, and 47-78% for P. sajor caju. A series of in vitro experiments was also conducted using crude extracts from the three species; comparison of the outcomes with those obtained using commercial laccase confirmed laccase as the enzyme responsible for antibiotic removal. The degradation efficiencies in vitro surpassed those measured in vivo, ranging from approximately 91-98% for commercial laccase, 77-92% for P. florida, 76-92% for P. eryngii, and 78-88% for P. sajor caju. Liquid chromatography-high-resolution mass spectrometry (LC-MS/MS) identified the degradation products, indicating a consistent enzymatic degradation pathway targeting the piperazine moiety common to all tested FQs, irrespective of the initial antibiotic structure.
Phytoplankton toxicity studies with Dunaliella tertiolecta were performed to aid in understanding the impact of emerging contaminants on ecosystems, and by-products were analysed for ecotoxicity to assess treatment efficacy. Laccase-mediated enzymatic oxidation shows promising results in reducing algal toxicity, notably with Pleurotus eryngii extract achieving a 97.7% decrease for CIP and a 90% decrease for LEV. These findings suggest the potential of these naturally sourced extracts in mitigating antibiotic contamination in aquatic ecosystems.
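The degradation efficiencies quoted throughout are presumably percent removal relative to the initial antibiotic concentration; a minimal sketch of that arithmetic with hypothetical concentrations (the values below are not from the study):

```python
def degradation_efficiency(c0, ct):
    """Percent of compound removed, from initial (c0) and residual (ct)
    concentrations in the same units."""
    return (c0 - ct) / c0 * 100.0

# Hypothetical LEV concentrations (mg/L) before and after 25 days of treatment
print(round(degradation_efficiency(10.0, 2.23), 1))  # -> 77.7
```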

7.
Methodist Debakey Cardiovasc J ; 20(4): 76-87, 2024.
Article in English | MEDLINE | ID: mdl-39184156

ABSTRACT

Heart failure (HF) affects millions of individuals and causes hundreds of thousands of deaths each year in the United States. Despite the public health burden, medical and device therapies for HF significantly improve clinical outcomes and, in a subset of patients, can cause reversal of abnormalities in cardiac structure and function, termed "myocardial recovery." By identifying novel patterns in high-dimensional data, artificial intelligence (AI) and machine learning (ML) algorithms can enhance the identification of key predictors and molecular drivers of myocardial recovery. Emerging research in the area has begun to demonstrate exciting results that could advance the standard of care. Although major obstacles remain to translate this technology to clinical practice, AI and ML hold the potential to usher in a new era of purposeful myocardial recovery programs based on precision medicine. In this review, we discuss applications of ML to the prediction of myocardial recovery, potential roles of ML in elucidating the mechanistic basis underlying recovery, barriers to the implementation of ML in clinical practice, and areas for future research.


Subjects
Heart Failure, Machine Learning, Predictive Value of Tests, Recovery of Physiological Function, Humans, Heart Failure/physiopathology, Heart Failure/therapy, Heart Failure/diagnosis, Precision Medicine, Treatment Outcome, Artificial Intelligence
8.
Article in English | MEDLINE | ID: mdl-39079099

ABSTRACT

Pickleball is the fastest-growing sport in the United States. People of all ages participate, with most players aged 35 years or older. Pickleball is a paddle sport with a smaller court, a lighter paddle, and rules similar to those of tennis. From 2019 to 2021, the number of pickleball players increased from 3.3 to 4.8 million. Historically, as a sport grows in popularity, there tends to be a linear increase in injuries. This review compiles data from retrospective studies containing emergency department data and from case reports of specific injuries sustained playing pickleball. One factor that could be perceived as favorable concerning injury risk is the smaller court size compared with tennis, although no correlation has been found between court size and rate of injury. The most common injuries presenting to the emergency department among pickleball players were muscle strains, joint sprains, and fractures. Men were three times more likely to sustain muscle strains and joint sprains, while women were three times more likely to sustain fractures. As the sport continues to grow, tracking injury types and mechanisms of injury will become important in informing injury prevention strategies and improving safety for players.

9.
Article in English | MEDLINE | ID: mdl-39058605

ABSTRACT

BACKGROUND: Antimicrobial resistance is a major public health threat, and new agents are needed. Computational approaches have been proposed to reduce the cost and time of compound screening. AIMS: To develop a machine learning (ML) model for the in silico screening of low-molecular-weight molecules. METHODS: We used the results of a high-throughput Caenorhabditis elegans methicillin-resistant Staphylococcus aureus (MRSA) liquid infection assay to develop ML models for compound prioritization and quality control. RESULTS: The compound prioritization model achieved an AUC of 0.795, with a sensitivity of 81% and a specificity of 70%. When applied to a validation set of 22,768 compounds, the model recovered 81% of the active compounds found by high-throughput screening (HTS) while flagging only 30.6% of the 22,768 compounds, a 2.67-fold increase in hit rate. When we retrained the model on all the compounds in the HTS dataset, it further identified 45 discordant molecules classified as non-hits by the HTS, of which 42 (93%) had known antimicrobial activity. CONCLUSION: Our ML approach can increase HTS efficiency by reducing the number of compounds that need to be physically screened and by identifying potential missed hits, making HTS more accessible and reducing barriers to entry.
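The reported ~2.67-fold hit-rate increase follows directly from recovering ~81% of actives while screening only ~30.6% of the library. A sketch of that arithmetic (using the rounded figures from the abstract, so the exact ratio differs slightly from the reported 2.67):

```python
def fold_enrichment(frac_actives_recovered, frac_library_screened):
    """Hit-rate enrichment of a model-prioritized subset relative to
    screening the whole library."""
    return frac_actives_recovered / frac_library_screened

# Rounded figures from the abstract; the paper's unrounded values
# presumably yield the reported 2.67-fold figure.
print(round(fold_enrichment(0.81, 0.306), 2))  # -> 2.65
```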

10.
Lancet Glob Health ; 12(8): e1323-e1330, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38976998

ABSTRACT

BACKGROUND: WHO estimates that more than 50 million people worldwide have epilepsy, and 80% of cases are in low-income and middle-income countries. Most studies in Africa have focused on active convulsive epilepsy in rural areas, and there are few data from urban settings. We aimed to estimate the prevalence and spatial distribution of all epilepsies in two urban informal settlements in Nairobi, Kenya. METHODS: We did a two-stage population-based cross-sectional study of residents in a demographic surveillance system covering two informal settlements in Nairobi, Kenya (Korogocho and Viwandani). Stage 1 screened all household members using a validated epilepsy screening questionnaire to detect possible cases. In stage 2, those identified as having possible seizures, and a proportion of those who screened negative, were invited to local clinics for clinical and neurological assessments by a neurologist. Seizures were classified following the International League Against Epilepsy recommendations. We adjusted for attrition between the two stages using multiple imputation and for sensitivity by dividing estimates by the sensitivity value of the screening tool. Complementary log-log regression was used to assess prevalence differences by participant socio-demographics. FINDINGS: A total of 56 425 individuals were screened in stage 1 (between Sept 17 and Dec 23, 2021), of whom 1126 were classified as potential epilepsy cases. A total of 873 were assessed by a neurologist in stage 2 (between April 12 and Aug 6, 2022), of whom 528 were confirmed as epilepsy cases; 253 potential cases were not assessed by a neurologist owing to attrition. 30 179 (53·5%) of the 56 425 individuals were male and 26 246 (46·5%) were female. The median age was 24 years (IQR 11-35).
Attrition-adjusted and sensitivity-adjusted prevalence for all types of epilepsy was 11·9 cases per 1000 people (95% CI 11·0-12·8), convulsive epilepsy was 8·7 cases per 1000 people (8·0-9·6), and non-convulsive epilepsy was 3·2 cases per 1000 people (2·7-3·7). Overall prevalence was highest among separated or divorced individuals at 20·3 cases per 1000 people (95% CI 15·9-24·7), unemployed people at 18·8 cases per 1000 people (16·2-21·4), those with no formal education at 18·5 cases per 1000 people (16·3-20·7), and adolescents aged 13-18 years at 15·2 cases per 1000 people (12·0-18·5). The epilepsy diagnostic gap was 80%. INTERPRETATION: Epilepsy is common in urban informal settlements of Nairobi, with large diagnostic gaps. Targeted interventions are needed to increase early epilepsy detection, particularly among vulnerable groups, to enable prompt treatment and prevention of adverse social consequences. FUNDING: National Institute for Health Research using Official Development Assistance.
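The sensitivity adjustment follows the stated procedure: divide the (imputation-corrected) crude prevalence estimate by the screening tool's sensitivity. A sketch with an assumed sensitivity value, since the actual sensitivity is not given in this abstract:

```python
def adjusted_prevalence_per_1000(cases, population, sensitivity):
    """Crude prevalence per 1000, corrected for imperfect screening
    sensitivity by dividing through by the sensitivity."""
    crude = cases / population * 1000
    return crude / sensitivity

# Illustrative inputs: 528 confirmed cases among 56,425 screened,
# with an assumed (hypothetical) screening sensitivity of 0.79
print(round(adjusted_prevalence_per_1000(528, 56425, 0.79), 1))  # -> 11.8
```

With these assumed inputs the adjusted estimate lands near the reported 11.9 per 1000; the paper's figure additionally incorporates the attrition imputation.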


Subjects
Epilepsy, Urban Population, Humans, Kenya/epidemiology, Epilepsy/epidemiology, Female, Prevalence, Male, Adult, Adolescent, Cross-Sectional Studies, Urban Population/statistics & numerical data, Young Adult, Child, Middle Aged, Child, Preschool, Infant
11.
Ann Surg Oncol ; 31(9): 5483-5486, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39003374

ABSTRACT

This is an executive summary of the most recent American Society for Radiation Oncology (ASTRO) guidelines on the use of partial breast irradiation in early-stage breast cancer. In the conscientious pursuit of "right-sizing" the management of patients with early-stage breast cancer, there has been an emphasis on judicious de-escalation of therapy. A component of this paradigm shift is partial breast irradiation (PBI), an approach characterized by targeted radiation therapy (RT) to lumpectomy cavity margins rather than to the whole breast (i.e., whole breast irradiation [WBI]) after breast conservation surgery (BCS). ASTRO recently completed a revision of its evidence-based guidelines for the application of PBI.1 To accomplish this, recent PBI data were reviewed by panel members, including representatives of ASTRO, in collaboration with the American Society of Clinical Oncology (ASCO) and the Society of Surgical Oncology (SSO), which provided representatives and peer reviewers. The guideline was approved by the ASTRO Board of Directors and endorsed by the Canadian Association of Radiation Oncology, the European Society for Radiotherapy and Oncology, the Royal Australian and New Zealand College of Radiologists, and the Society of Surgical Oncology. The recommendations focus on indications for PBI as an alternative to WBI and on technical considerations specific to PBI. This editorial summarizes and comments on the updated ASTRO PBI guidelines, offering insights into the implications of these findings for clinical practice and multidisciplinary decision-making while underscoring technical considerations for optimal incorporation of PBI into patient care.


Subjects
Breast Neoplasms, Segmental Mastectomy, Practice Guidelines as Topic, Humans, Breast Neoplasms/radiotherapy, Breast Neoplasms/surgery, Breast Neoplasms/pathology, Female, Practice Guidelines as Topic/standards, Radiation Oncology/standards, Radiotherapy, Adjuvant/standards, Radiotherapy, Adjuvant/methods, Medical Societies, Surgical Oncology/standards
12.
J Card Fail ; 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38997000

ABSTRACT

BACKGROUND: Sodium-glucose cotransporter-2 inhibitors (SGLT2is) have demonstrated benefit in patients with heart failure, but minimal data exist concerning the use of these medications in amyloid light-chain cardiomyopathy (AL-CM). We performed a retrospective study to assess the safety and efficacy of SGLT2is in AL-CM. METHODS: We queried our institutional registry and identified 27 patients with AL-CM who received SGLT2is. The safety analysis included all 27 patients and assessed SGLT2i-associated adverse events, hospitalizations and deaths. To decrease confounding, the efficacy analysis included only a subset of patients with stable disease (on stable anti-plasma cell therapy for ≥ 2 months prior to baseline and had achieved at least a hematologic Very Good Partial Response) and compared disease-marker changes in these patients (n = 17) with those of a contemporaneous untreated control cohort from our registry (n = 21). RESULTS: The mean age of the overall population was 68.6 (standard deviation 9.4) years. Of the patients, 7 (14.6%) had diabetes, and 19 (39.6%) had chronic kidney disease. In the safety analysis, the median follow-up time was 10.9 (interquartile range 7.2) months. Two (7.4%) patients discontinued SGLT2is due to hypovolemia and genital irritation, and 6 (22.2%) additional patients temporarily held SGLT2is due to an adverse event that is commonly related to volume depletion. There were 13 hospitalizations, all considered unrelated to SGLT2i use, and no deaths occurred. In the efficacy analysis, SGLT2i-treated patients had more severe disease at baseline than controls, demonstrating significantly higher median troponin-T and loop diuretic dosage (P < 0.05). Compared with controls, SGLT2i treatment was associated with significantly greater reductions in loop diuretic dosage (P < 0.001) and NTproBNP levels (P = 0.033) across 3-, 6- and 12-month follow-up timepoints. 
SGLT2i treatment was also associated with a significantly greater reduction in mean arterial pressure at 12 months (P = 0.031) but not at other timepoints. No significant differences were observed in changes in weight, eGFR, troponin-T, proteinuria, or albumin levels. CONCLUSIONS: In this small-scale retrospective study, we demonstrate that SGLT2is are well tolerated by most patients with AL-CM, but volume depletion symptoms may limit continuous use. SGLT2is may aid management of congestion in AL-CM, as evidenced by reduced diuretic dosage and NTproBNP levels without adverse renal effects. Larger long-term studies are needed to build on our findings.

13.
Front Netw Physiol ; 4: 1211413, 2024.
Article in English | MEDLINE | ID: mdl-38948084

ABSTRACT

Algorithms for the detection of COVID-19 illness from wearable sensor devices tend to implicitly treat the disease as causing a stereotyped (and therefore recognizable) deviation from healthy physiology. In contrast, a substantial diversity of bodily responses to SARS-CoV-2 infection has been reported in the clinical milieu. This raises the question of how to characterize the diversity of illness manifestations, and whether such characterization could reveal meaningful relationships across different illness manifestations. Here, we present a framework motivated by information theory to generate quantified maps of illness presentation, which we term "manifestations," as resolved by continuous physiological data from a wearable device (Oura Ring). We test this framework on five physiological data streams (heart rate, heart rate variability, respiratory rate, metabolic activity, and sleep temperature) assessed at the time of reported illness onset in a previously reported COVID-19-positive cohort (N = 73). We find that the number of distinct manifestations in this cohort is small compared to the space of all possible manifestations. In addition, manifestation frequency correlates with the approximate number of symptoms reported by a given individual over a several-day period prior to their imputed onset of illness. These findings suggest that information-theoretic approaches can be used to sort COVID-19 illness manifestations into types with real-world value. This proof of concept supports the use of information-theoretic approaches to map illness manifestations from continuous physiological data. Such approaches could inform algorithm design and real-time treatment decisions if developed on large, diverse samples.
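One simple way to operationalize "manifestations" in the spirit described (an illustrative sketch, not the authors' actual algorithm) is to binarize each physiological stream's deviation at illness onset and count distinct patterns across the cohort; with 5 binary streams the space holds 2^5 = 32 possible manifestations:

```python
from collections import Counter

# Assumed stream names: heart rate, HRV, respiratory rate, metabolic, sleep temp
STREAMS = ["hr", "hrv", "rr", "met", "temp"]

def manifestation_counts(cohort_deviations):
    """cohort_deviations: per-person dicts of stream -> bool (deviated at onset).
    Returns a Counter mapping each binary manifestation code to its frequency."""
    return Counter(
        tuple(int(person[s]) for s in STREAMS) for person in cohort_deviations
    )

# Toy cohort of three individuals
cohort = [
    {"hr": True, "hrv": True, "rr": True, "met": False, "temp": True},
    {"hr": True, "hrv": True, "rr": True, "met": False, "temp": True},
    {"hr": False, "hrv": True, "rr": False, "met": False, "temp": True},
]
counts = manifestation_counts(cohort)
print(len(counts))  # -> 2 distinct manifestations out of 32 possible
```

The cohort-level frequency of each code is then the quantity one could correlate with reported symptom counts.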

14.
Front Med (Lausanne) ; 11: 1380148, 2024.
Article in English | MEDLINE | ID: mdl-38966538

ABSTRACT

Background: Large language models (LLMs) have recently gained popularity in diverse areas, including answering questions posted by patients as well as medical professionals. Objective: To evaluate the performance and limitations of LLMs in providing the correct diagnosis for complex clinical cases. Design: Seventy-five consecutive clinical cases were selected from the Massachusetts General Hospital Case Records, and differential diagnoses were generated by OpenAI's GPT3.5 and GPT4 models. Results: The mean number of diagnoses provided was 16.77 by the Massachusetts General Hospital case discussants, 30 by GPT3.5, and 15.45 by GPT4 (p < 0.0001). GPT4 more frequently listed the correct diagnosis first (22% versus 20% with GPT3.5, p = 0.86) and included the correct diagnosis among the top three generated diagnoses (42% versus 24%, p = 0.075), although neither difference reached statistical significance. GPT4 was better at providing the correct diagnosis when the different diagnoses were classified into groups according to medical specialty, and at including the correct diagnosis at any point in the differential list (68% versus 48%, p = 0.0063). GPT4 also provided a differential list more similar to that of the case discussants than GPT3.5 did (Jaccard Similarity Index 0.22 versus 0.12, p = 0.001). Inclusion of the correct diagnosis in the generated differential was correlated with the number of PubMed articles matching the diagnosis (OR 1.40, 95% CI 1.25-1.56 for GPT3.5; OR 1.25, 95% CI 1.13-1.40 for GPT4), but not with disease incidence. Conclusions and relevance: The GPT4 model generated a differential diagnosis list containing the correct diagnosis in approximately two thirds of cases, but the most likely diagnosis was often incorrect for both models. In its current state, this tool can at most be used as an aid to expand potential diagnostic considerations for a case, and future LLMs should be trained to account for the discrepancy between disease incidence and availability in the literature.
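The Jaccard Similarity Index reported above (0.22 versus 0.12) is the size of the intersection of two differential lists divided by the size of their union; a minimal version on hypothetical diagnosis sets (the diagnoses below are invented for illustration):

```python
def jaccard(a, b):
    """|A intersect B| / |A union B| for two collections of diagnoses."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Hypothetical model-generated vs. discussant differential lists
model = {"sarcoidosis", "lymphoma", "tuberculosis"}
discussants = {"sarcoidosis", "lymphoma", "IgG4-related disease", "histoplasmosis"}
print(round(jaccard(model, discussants), 2))  # -> 0.4
```

An index of 0.22 thus means roughly one shared diagnosis for every four or five distinct diagnoses across the two lists combined.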

15.
JMIR Cancer ; 10: e55438, 2024 Jul 18.
Article in English | MEDLINE | ID: mdl-39024570

ABSTRACT

BACKGROUND: Since the COVID-19 pandemic began, we have seen rapid growth in telemedicine use. However, telehealth care and services are not equally distributed, and not all patients with breast cancer have equal access across US regions. There are notable gaps in existing literature regarding the influence of neighborhood-level socioeconomic status on telemedicine use in patients with breast cancer and oncology services offered through telehealth versus in-person visits. OBJECTIVE: We assessed the relationship between neighborhood socioeconomic disadvantage and telemedicine use among patients with breast cancer and examined differential provisions of oncology services between telehealth and in-person visits. METHODS: Neighborhood socioeconomic disadvantage was measured using the Area Deprivation Index (ADI), with higher scores indicating greater disadvantages. Telemedicine and in-person visits were defined as having had a telehealth and in-person visit with a provider, respectively, in the past 12 months. Multivariable logistic regression was performed to examine the association between ADI and telemedicine use. The McNemar test was used to assess match-paired data on types of oncology services comparing telehealth and in-person visits. RESULTS: The mean age of the patients with breast cancer (n=1163) was 61.8 (SD 12.0) years; 4.58% (52/1161) identified as Asian, 19.72% (229/1161) as Black, 3.01% (35/1161) as Hispanic, and 72.78% (845/1161) as White. Overall, 35.96% (416/1157) had a telemedicine visit in the past 12 months. Of these patients, 65% (266/409) had a videoconference visit only, 22.7% (93/409) had a telephone visit only, and 12.2% (50/409) had visits by both videoconference and telephone. Higher ADI scores were associated with a lower likelihood of telemedicine use (adjusted odds ratio [AOR] 0.89, 95% CI 0.82-0.97). 
Black (AOR 2.38, 95% CI 1.41-4.00) and Hispanic (AOR 2.65, 95% CI 1.07-6.58) patients had greater odds of telemedicine use than White patients. Compared to patients with high school or less education, those with an associate's degree (AOR 2.67, 95% CI 1.33-5.35), a bachelor's degree (AOR 2.75, 95% CI 1.38-5.48), or a graduate or professional degree (AOR 2.57, 95% CI 1.31-5.04) had higher odds of telemedicine use in the past 12 months. There were no significant differences in providing treatment consultation (45/405, 11.1% vs 55/405, 13.6%; P=.32) or cancer genetic counseling (11/405, 2.7% vs 19/405, 4.7%; P=.14) between telehealth and in-person visits. Of the telemedicine users, 95.8% (390/407) reported being somewhat to extremely satisfied, and 61.8% (254/411) were likely or very likely to continue using telemedicine. CONCLUSIONS: In this study of a multiethnic cohort of patients with breast cancer, our findings suggest that neighborhood-level socioeconomic disparities exist in telemedicine use and that telehealth visits could be used to provide treatment consultation and cancer genetic counseling. Oncology programs should address these disparities and needs to improve care delivery and achieve telehealth equity for their patient populations.
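The McNemar test used for the matched telehealth/in-person comparisons depends only on the discordant pairs (patients with the service in one setting but not the other). An exact binomial version on hypothetical discordant counts; note the abstract reports marginal percentages, not the discordant cells themselves, so the numbers below are invented:

```python
import math

def mcnemar_exact_p(b, c):
    """Exact two-sided McNemar p-value from the discordant cell counts
    b (yes/no) and c (no/yes), via the binomial distribution with p = 0.5."""
    n = b + c
    k = min(b, c)
    tail = sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical: 3 pairs discordant one way, 9 the other
print(round(mcnemar_exact_p(3, 9), 3))  # -> 0.146
```

The exact form is preferable when discordant counts are small, as they would be for rare services like genetic counseling.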

16.
Sports Health ; : 19417381241258482, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38877729

ABSTRACT

BACKGROUND: Understanding the epidemiology of injuries to athletes is essential to informing injury prevention efforts. HYPOTHESIS: The incidence and impact of basketball-related injuries among National Basketball Association (NBA) players from 2013-2014 through 2018-2019 is relatively stable over time. STUDY DESIGN: Descriptive epidemiology study. LEVEL OF EVIDENCE: Level 3. METHODS: Injuries from 2013-2014 through 2018-2019 were analyzed using the NBA Injury and Illness Database from an electronic medical record system. Descriptive statistics were calculated for injuries by season, game-loss, and onset. Incidence rates were estimated using Poisson models and linear trend tests. RESULTS: Between 552 and 606 players participated in ≥1 game per season during the study. Annual injury incidence ranged from 1550 to 1892, with 33.6% to 38.5% resulting in a missed NBA game. Game-loss injury rates ranged from 5.6 to 7.0 injuries per 10,000 player-minutes from 2014-2015 through 2018-2019 (P = 0.19); the rate was lower in 2013-2014 (5.0 injuries per 10,000 player-minutes), partly due to increased preseason injury rates and transition of reporting processes. The 6-year game-loss injury rate in preseason and regular season games was 6.9 (95% CI 6.0, 8.0) and 6.2 (95% CI 6.0, 6.5) injuries per 10,000 player-minutes; the rate in playoff games was lower (P < 0.01) at 2.8 (95% CI 2.2, 3.6). Most (73%) game-loss injuries had acute onset; 44.4% to 52.5% of these involved contact with another player. CONCLUSION: From 2013-2014 through 2018-2019, over one-third of injuries resulted in missed NBA games, with highest rates of game-loss injuries in preseason games and lowest rates in playoff games. Most game-loss injuries had acute onset, and half of those involved contact with another player. 
CLINICAL RELEVANCE: These findings, derived from reliable data reporting by team medical staff in an audited system, can guide evidence-based injury reduction strategies and inform player health priorities.
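The game-loss rates above are expressed as injuries per 10,000 player-minutes with Poisson-based intervals. A minimal sketch of that calculation, using a normal-approximation CI and invented counts rather than the NBA data:

```python
import math

def rate_per_10k_minutes(events, player_minutes, z=1.96):
    """Injury incidence per 10,000 player-minutes with a
    normal-approximation Poisson 95% CI (a simplification of the
    Poisson models described in the abstract)."""
    scale = 10_000 / player_minutes
    rate = events * scale
    # Poisson variance equals the count, so SE(events) ~ sqrt(events)
    half = z * math.sqrt(events) * scale
    return rate, rate - half, rate + half

# Illustrative: 620 game-loss injuries over 1,000,000 player-minutes
rate, lo, hi = rate_per_10k_minutes(620, 1_000_000)
```

Model-based Poisson regression (as used in the study) would additionally allow covariates and trend tests; this sketch covers only the crude rate and interval.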

17.
Article in English | MEDLINE | ID: mdl-38852703

ABSTRACT

BACKGROUND: Recovery after anatomic total shoulder arthroplasty (aTSA) and reverse total shoulder arthroplasty (rTSA) has many similarities; however, surgeons have recently suggested that patients undergoing rTSA have a less difficult postoperative course with less pain than aTSA patients. Given the heightened attention to postoperative pain control and opioid consumption, as well as the expanding indications for rTSA, we sought to determine the differences in pain and opioid consumption between aTSA and rTSA patients over a 12-week postoperative period. METHODS: A retrospective chart review was performed to identify all patients who underwent a primary aTSA or rTSA from January 2013 to April 2018 at a single institution. Patients with recorded visual analog scale (VAS) and American Shoulder and Elbow Surgeons scores were included for analysis. Revision arthroplasties were excluded. VAS scores were recorded preoperatively and at standard 2-week, 6-week, and 12-week postoperative time points. P values < .05 were considered statistically significant, except where Bonferroni corrections were applied. RESULTS: A total of 690 patients underwent TSA (278 aTSA, 412 rTSA). Preoperatively, the aTSA and rTSA groups had similar VAS scores (6 vs. 6, P = .38). Postoperatively, the aTSA group had a higher VAS score at the 6-week visit than the rTSA group (2.8 vs. 2.2, P = .003). aTSA patients remained on opioids at a higher rate at the 2-week time point (62.4% vs. 45.6%, P ≤ .001). aTSA patients needed more opioid prescription refills before the 2-week (61.7% vs. 45.5%, P ≤ .001) and 6-week (40.4% vs. 30.7%, P = .01) follow-up visits. CONCLUSIONS: Despite similar preoperative VAS scores and rates of preoperative opioid use, aTSA patients required more opioid medication refills and remained on opioids for a longer duration in the early postoperative period to achieve postoperative pain control similar to that of rTSA patients, as indicated by similar VAS scores.
This study suggests that recovery from rTSA is less difficult than recovery from aTSA, as indicated by VAS scores and opioid consumption.
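Comparisons such as the 2-week opioid-use rates (62.4% vs. 45.6%) are tests of two proportions. A hedged sketch using a pooled two-proportion z-test with illustrative counts (the study's exact test may differ):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference in proportions, e.g. the share
    of patients in each group still taking opioids at a time point."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                 # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    pval = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, pval

# Illustrative counts roughly matching the reported group sizes
z, pval = two_proportion_z(172, 278, 187, 412)
```

With proportions this far apart at these sample sizes, the p-value is well below .001, consistent in direction with the abstract's reported comparisons.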

18.
JCO Clin Cancer Inform ; 8: e2400077, 2024 May.
Article in English | MEDLINE | ID: mdl-38822755

ABSTRACT

PURPOSE: Artificial intelligence (AI) models can generate scientific abstracts that are difficult to distinguish from the work of human authors. The use of AI in scientific writing and the performance of AI detection tools are poorly characterized. METHODS: We extracted text from published scientific abstracts from the ASCO 2021-2023 Annual Meetings. Likelihood of AI content was evaluated by three detectors: GPTZero, Originality.ai, and Sapling. Optimal thresholds for AI content detection were selected using 100 abstracts from before 2020 as negative controls and 100 produced by OpenAI's GPT-3 and GPT-4 models as positive controls. Logistic regression was used to evaluate the association of predicted AI content with submission year and abstract characteristics, and adjusted odds ratios (aORs) were computed. RESULTS: A total of 15,553 abstracts met inclusion criteria. Across detectors, abstracts submitted in 2023 were significantly more likely to contain AI content than those submitted in 2021 (aORs ranging from 1.79 with Originality to 2.37 with Sapling). Online-only publication and lack of a clinical trial number were consistently associated with AI content. With optimal thresholds, 99.5%, 96%, and 97% of GPT-3/4-generated abstracts were identified by GPTZero, Originality, and Sapling, respectively, and no sampled abstracts from before 2020 were classified as AI-generated by the GPTZero and Originality detectors. Correlation between detectors was low to moderate, with Spearman correlation coefficients ranging from 0.14 (Originality vs. Sapling) to 0.47 (Sapling vs. GPTZero). CONCLUSION: There is an increasing signal of AI content in ASCO abstracts, coinciding with the growing popularity of generative AI models.
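The threshold selection described in METHODS can be sketched as: choose the smallest score cutoff that flags none of the pre-2020 human controls, then report sensitivity on the AI-generated controls. The scores below are invented; the detectors' real score scales and the study's exact optimization may differ:

```python
def pick_threshold(human_scores, ai_scores):
    """Lowest detector-score cutoff with zero false positives on human
    controls, plus the fraction of AI controls it still catches."""
    threshold = max(human_scores)          # sit just above every human control
    caught = sum(s > threshold for s in ai_scores)
    return threshold, caught / len(ai_scores)

# Hypothetical detector scores for control abstracts
threshold, sensitivity = pick_threshold(
    human_scores=[0.1, 0.2, 0.3],
    ai_scores=[0.5, 0.9, 0.25, 0.8],
)
```

This zero-false-positive criterion matches the reported result that no pre-2020 abstracts were classified as AI-generated at the chosen thresholds, at the cost of missing some AI-generated controls.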


Subjects
Abstracting and Indexing, Artificial Intelligence, Medical Oncology, Humans, Medical Oncology/methods
19.
Article in English | MEDLINE | ID: mdl-38942227

ABSTRACT

BACKGROUND: Previous studies have demonstrated the safety and cost-effectiveness of outpatient total shoulder arthroplasty (TSA), with the majority focusing on 90-day outcomes and complications. Patient-selection algorithms have helped appropriately choose patients for an outpatient TSA setting. This study aimed to compare the outcomes of TSA between outpatient and inpatient cohorts with at least 2 years of follow-up. METHODS: A retrospective review identified patients older than 18 years who underwent a TSA with a minimum of 2-year follow-up in either an inpatient or outpatient setting. Using a previously published outpatient TSA patient-selection algorithm, patients were allocated into three groups: outpatient, inpatient due to insurance requirements, and inpatient due to not meeting algorithm criteria. Outcomes evaluated included visual analog scale pain, American Shoulder and Elbow Surgeons score, Single Assessment Numeric Evaluation score, range of motion (ROM), strength, complications, readmissions, and reoperations. Analysis was performed between the outpatient and inpatient groups to demonstrate the safety and efficacy of outpatient TSA at midterm follow-up. RESULTS: A total of 779 TSAs were included, allocated to the outpatient (N = 108), inpatient due to insurance (N = 349), and inpatient due to algorithm criteria (N = 322) groups. Average age differed significantly across these groups (59.4 ± 7.4, 66.5 ± 7.5, and 72.5 ± 8.7 years, respectively; P < .0001). All groups demonstrated significant preoperative-to-final improvements in patient-reported outcome scores, ROM, and strength. Comparison between cohorts showed similar final follow-up outcome scores, ROM, and strength, with few statistically significant differences, none likely to be clinically meaningful, regardless of surgical location, insurance status, or whether the patient-selection algorithm was met. Complications, reoperations, and readmissions did not differ significantly among the three groups.
CONCLUSION: This study reaffirms prior short-term follow-up literature. Transitioning appropriate patients to outpatient TSA results in similar outcomes and complications compared to inpatient cohorts with midterm follow-up.

20.
NPJ Digit Med ; 7(1): 150, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38902390

ABSTRACT

Sleep monitoring has become widespread with the rise of affordable wearable devices. However, converting sleep data into actionable change remains challenging because diverse factors can cause combinations of sleep parameters to differ both between people and within people over time. Researchers have attempted to combine sleep parameters to better detect similarities between nights of sleep. The cluster of similar combinations of sleep parameters from a night of sleep defines that night's sleep phenotype. To date, quantitative models of sleep phenotype built from data collected from large populations have used cross-sectional data, which preclude longitudinal analyses that could better quantify differences within individuals over time. In the analyses reported here, we used five million nights of wearable sleep data to test (a) whether an individual's sleep phenotype changes over time and (b) whether these changes elucidate new information about acute periods of illness (e.g., flu, fever, COVID-19). We found evidence for 13 sleep phenotypes associated with sleep quality and found that individuals transition between these phenotypes over time. Patterns of transitions differed significantly (i) between individuals (with vs. without a chronic health condition; chi-square test; p-value < 1e-100) and (ii) within individuals over time (before vs. during an acute condition; chi-square test; p-value < 1e-100). Finally, we found that the patterns of transitions carried more information about chronic and acute health conditions than did phenotype membership alone (longitudinal analyses yielded 2-10× as much information as cross-sectional analyses). These results support the use of temporal dynamics in the future development of longitudinal sleep analyses.
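The transition-pattern comparisons are chi-square tests on transition counts. A minimal pure-Python sketch of the test statistic for two groups observed over the same set of phenotype transitions (counts are illustrative; the study's full procedure is not reproduced here):

```python
def chi_square_stat(observed_a, observed_b):
    """Chi-square statistic comparing two groups' counts over the same
    transition categories (a 2 x k contingency table, flattened)."""
    stat = 0.0
    total_a, total_b = sum(observed_a), sum(observed_b)
    grand = total_a + total_b
    for oa, ob in zip(observed_a, observed_b):
        col = oa + ob
        ea = col * total_a / grand   # expected count in group A
        eb = col * total_b / grand   # expected count in group B
        stat += (oa - ea) ** 2 / ea + (ob - eb) ** 2 / eb
    return stat
```

Identical transition distributions give a statistic of zero; the more the two groups' transition patterns diverge, the larger the statistic, and a p-value would then come from the chi-square distribution with k - 1 degrees of freedom.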
