ABSTRACT
BACKGROUND: Low socioeconomic status has been shown to contribute to poor outcomes in patients undergoing joint replacement surgery. However, there is a paucity of studies investigating shoulder arthroplasty. The purpose of this study was to evaluate the effect of socioeconomic status on baseline and postoperative outcome scores and implant survivorship after anatomic and reverse primary total shoulder arthroplasty (TSA). METHODS: A retrospective review of a prospectively collected single-institution database was performed to identify patients who underwent primary TSA. Zip codes were collected and converted to Area Deprivation Index (ADI) scores. We performed a correlation analysis between national ADI scores and preoperative, postoperative, and preoperative-to-postoperative improvement in range of motion (ROM), shoulder strength, and functional outcome scores in patients with a minimum 2-year follow-up. Patients were additionally stratified into groups according to their national ADI. Achievement of the minimum clinically important difference (MCID), substantial clinical benefit (SCB), and patient acceptable symptom state (PASS) and revision-free survivorship were compared between groups. RESULTS: A total of 1148 procedures, including 415 anatomic and 733 reverse total shoulder arthroplasties with a mean age of 64 ± 8.2 and 69.9 ± 8.0 years, respectively, were included. The mean follow-up was 6.3 ± 3.6 years for anatomic and 4.9 ± 2.7 years for reverse total shoulder arthroplasty. We identified a weak negative correlation between national ADI and most functional outcome scores and ROM preoperatively (R range 0.07-0.16), postoperatively (R range 0.09-0.14), and for preoperative-to-postoperative improvement (R range 0.01-0.17). Thus, greater area deprivation was weakly associated with poorer function preoperatively, poorer final outcomes, and less improvement in outcomes. There was no difference in the proportion of each ADI group achieving MCID, SCB, and PASS in the anatomic total shoulder arthroplasty cohort. However, in the reverse total shoulder arthroplasty cohort, the proportion of patients achieving MCID, SCB, and PASS decreased with greater deprivation. There was no difference in survivorship between ADI groups. CONCLUSIONS: We found a negative effect of low socioeconomic status on baseline and postoperative patient outcomes and ROM; however, the correlations were relatively weak. Patients who reside in socioeconomically deprived areas have poorer functional outcomes before and after TSA and achieve less improvement from surgery. We should strive to identify modifiable factors to improve the success of TSA in socioeconomically deprived areas.
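The correlation and group-comparison analyses described above can be approximated with standard statistical tooling. The sketch below is illustrative only: the arrays are hypothetical stand-ins for the study's ADI scores and outcome measures, and the abstract does not specify whether Pearson or Spearman coefficients were used.

```python
# Illustrative sketch of an ADI-versus-outcome analysis of the kind described above.
# All data here are hypothetical; the study's actual variables and correlation
# method (Pearson vs. Spearman) are not specified in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 200
adi = rng.integers(1, 101, size=n)                  # national ADI percentile (1 = least deprived)
outcome = 70 - 0.1 * adi + rng.normal(0, 15, n)     # hypothetical functional outcome score

# Weak negative correlation between deprivation and outcome
r, p = stats.pearsonr(adi, outcome)
print(f"Pearson R = {r:.2f}, p = {p:.3f}")

# Compare the proportion achieving a clinically important threshold (e.g., MCID)
# across ADI tertiles with a chi-square test of independence.
tertile = np.digitize(adi, np.quantile(adi, [1 / 3, 2 / 3]))
achieved_mcid = (outcome > 60).astype(int)          # hypothetical MCID threshold
table = np.array([[np.sum((tertile == t) & (achieved_mcid == 1)),
                   np.sum((tertile == t) & (achieved_mcid == 0))] for t in range(3)])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_chi:.3f}")
```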
ABSTRACT
PURPOSE: This systematic review and meta-analysis compared clinical outcome measures in patients undergoing reverse shoulder arthroplasty (RSA) for proximal humerus fracture (PHF) with healed versus non-healed greater tuberosity (GT). METHODS: We performed a systematic review according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, querying PubMed/MEDLINE, EMBASE, Web of Science, and Cochrane for studies that stratified results by GT healing status. Studies that did not attempt to repair the GT were excluded. We extracted and compared clinical outcomes including postoperative forward flexion (FF), external rotation (ER), internal rotation (IR), Constant score, and complication and revision rates. RESULTS: Of the included patients, 295 (78.5%) demonstrated GT healing while 81 (21.5%) did not. The healed GT cohort exhibited greater postoperative FF (P < .001), ER (P < .001), IR (P = .006), and Constant score (P = .006) than the non-healed GT cohort. The overall dislocation rate was 0.8%, with no study differentiating the GT status of dislocation cases. CONCLUSION: Healing of the GT after RSA for PHF yields improved postoperative range of motion and strength, whereas patient-reported pain and function were largely unaffected by GT healing, indicating merit to RSA for PHF regardless of the likelihood of GT healing.
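A fixed-effect, inverse-variance pooling of per-study mean differences is one conventional way to combine continuous outcomes such as forward flexion across included studies. The sketch below uses made-up study-level summaries; the review's actual pooling model (fixed vs. random effects) and data are not reproduced here.

```python
# Hedged sketch: inverse-variance (fixed-effect) pooling of mean differences in a
# continuous outcome (e.g., forward flexion) between healed and non-healed GT
# cohorts. The study summaries below are hypothetical.
import numpy as np
from scipy import stats

# (mean_healed, sd_healed, n_healed, mean_nonhealed, sd_nonhealed, n_nonhealed)
studies = [
    (140, 20, 60, 120, 25, 15),
    (135, 18, 45, 118, 22, 12),
    (128, 25, 80, 115, 28, 20),
]

md, var = [], []
for m1, s1, n1, m2, s2, n2 in studies:
    md.append(m1 - m2)                       # mean difference per study
    var.append(s1**2 / n1 + s2**2 / n2)      # variance of the mean difference

w = 1 / np.array(var)                        # inverse-variance weights
pooled = np.sum(w * np.array(md)) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
z = pooled / se
p = 2 * stats.norm.sf(abs(z))
print(f"Pooled mean difference = {pooled:.1f} degrees (SE {se:.1f}), z = {z:.2f}, p = {p:.4f}")
```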
Subjects
Shoulder Arthroplasty, Range of Motion (Articular), Shoulder Fractures, Humans, Shoulder Arthroplasty/methods, Shoulder Arthroplasty/adverse effects, Recovery of Function, Shoulder Fractures/surgery, Shoulder Joint/surgery, Shoulder Joint/physiopathology, Treatment Outcome
ABSTRACT
Introduction Pituitary neuroendocrine tumors (PitNETs) are rare skull base tumors that can impart significant disability owing to their locally invasive potential. To date, the gamut of PitNET subtypes remains ill-understood at the ligand-receptor (LR) interactome level, potentially limiting therapeutic options. Here, we present findings from an in silico analysis of LR complexes formed by PitNETs with clinical presentations of acromegaly, Cushing's disease, high prolactin production, and without symptoms of hormone hypersecretion. Methods Previously published PitNET gene expression data were acquired from ArrayExpress. These data represented all secretion types. LR interactions were analyzed via a crosstalk score approach. Results The cortisol (CORT) ligand was significantly involved in tumor-to-tumor signaling across all PitNET subtypes except prolactinomas, which evidenced active CORT depletion. Likewise, the CCL25 ligand was implicated in 20% of the top LR complex interactions along the tumor-to-stroma signaling axis, but silent PitNETs showed unique depletion of the CCL25 ligand. Along the stroma-to-tumor signaling axis, all clinical PitNET subtypes enriched stromal vasoactive intestinal polypeptide ligand interactions with the tumor secretin receptor. All clinical PitNET subtypes enriched stromal DEFB103B (human ß-defensin 103B) ligand interactions with stromal chemokine receptors along the stroma-to-stroma signaling axis. In PitNETs causing Cushing's disease, the immune checkpoint ligand CD274 showed high stromal expression, whereas prolactinomas showed low stromal expression. Moreover, prolactinomas evidenced distinctly high stromal expression of the immune-exhausted T cell response marker IL10RA compared with other clinical subtypes. Conclusion Relative crosstalk score analysis revealed a great diversity of LR complex interactions across clinical PitNET subtypes and between solid tumor compartments. More data are needed to validate these findings and establish their exact clinical importance.
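One generic way to frame a ligand-receptor crosstalk score is to take, for each LR pair, the product of ligand expression in the sending compartment and receptor expression in the receiving compartment, then normalize across the four signaling axes. The sketch below uses hypothetical expression values and LR pairs and is not the specific scoring pipeline used in the study.

```python
# Generic sketch of a relative crosstalk score between tumor and stroma
# compartments for a set of ligand-receptor (LR) pairs. Expression values and
# LR pairs are hypothetical; the study's actual scoring method may differ.
import itertools

# Mean expression per compartment (hypothetical, arbitrary units)
expression = {
    "tumor":  {"VIP": 0.5, "SCTR": 2.0, "CCL25": 3.1, "CCR9": 0.8},
    "stroma": {"VIP": 4.2, "SCTR": 0.3, "CCL25": 1.0, "CCR9": 2.5},
}
lr_pairs = [("VIP", "SCTR"), ("CCL25", "CCR9")]

for ligand, receptor in lr_pairs:
    # Raw score for each sender -> receiver axis: ligand expression x receptor expression
    raw = {
        (s, r): expression[s][ligand] * expression[r][receptor]
        for s, r in itertools.product(["tumor", "stroma"], repeat=2)
    }
    total = sum(raw.values())
    for (sender, receiver), score in raw.items():
        rel = score / total if total else 0.0   # relative crosstalk score
        print(f"{ligand}-{receptor} {sender}->{receiver}: {rel:.2f}")
```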
ABSTRACT
Polycystic ovary syndrome (PCOS) is a common endocrine disorder that disrupts reproductive function and hormonal balance. It primarily affects reproductive-aged women and leads to physical, metabolic, and emotional challenges affecting quality of life. In this study, we developed a machine learning-based model to accurately distinguish PCOS pelvic ultrasound images from normal pelvic ultrasound images. Leveraging 1,932 pelvic ultrasound images from the Kaggle online platform (Google LLC, Mountain View, CA), we created a model that recognizes the hallmark findings of PCOS, multiple small ovarian follicles and increased ovarian volume, and uses them to separate PCOS pelvic ultrasound images from normal ones. The model demonstrated promising performance, achieving a precision of 82.6% and a recall of 100%, with a sensitivity and specificity of 100% each. The overall accuracy was 100%, and the F1 score was calculated to be 0.905. As these results are promising, further validation studies are necessary to generalize the model's capabilities and to incorporate other diagnostic factors of PCOS, such as physical exam findings and laboratory values.
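The reported metrics follow directly from a confusion matrix; for example, an F1 score of 0.905 is what the standard formula gives for a precision of 82.6% and a recall of 100%. The sketch below computes these quantities from hypothetical counts and is not tied to the study's actual predictions.

```python
# Sketch of the standard classification metrics reported above, computed from a
# hypothetical confusion matrix (PCOS = positive class).
tp, fp, fn, tn = 38, 8, 0, 54     # hypothetical counts

precision   = tp / (tp + fp)                  # ~0.826 with these counts
recall      = tp / (tp + fn)                  # recall equals sensitivity
specificity = tn / (tn + fp)
accuracy    = (tp + tn) / (tp + fp + fn + tn)
f1 = 2 * precision * recall / (precision + recall)   # ~0.905 for precision 0.826, recall 1.0

print(f"precision={precision:.3f} recall={recall:.3f} "
      f"specificity={specificity:.3f} accuracy={accuracy:.3f} F1={f1:.3f}")
```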
ABSTRACT
Background: The downstream regional effect of the Comprehensive Care for Joint Replacement (CJR) program on care pathway-adjacent patients, including revision arthroplasty patients, is poorly understood. Prior studies have demonstrated that care pathways targeting primary total joint arthroplasty may produce a halo effect, impacting more complex patients with parallel care pathways. However, neither the effect of regional referral changes from CJR nor the durability of these positive changes with prolonged bundle participation has been assessed. Methods: Blinded data were pulled from electronic medical records. Primary analyses focused on the effect of CJR participation from 2015 (baseline) to 2020 (final participation year) at a tertiary care safety-net hospital. Patient demographics were evaluated using multivariate analysis of variance and chi-square tests between procedure types over time. Results: Patients who underwent revision total knee arthroplasty (N = 376) and revision total hip arthroplasty (N = 482) were included. More patients moved through the revision-care pathway over the participation period, with volume increasing by 42% over time. Patients became more medically complex: the Charlson comorbidity index increased from 3.91 to 4.65 (P = .01). The mean length of stay decreased from 5.14 days to 4.50 days (P = .03), but the all-cause complication rate (8.3%-15.2%; P = .02) and readmission rate (13.6%-16.6%; P = .19) increased over time. Conclusions: Despite care pathway improvements over 5 years of CJR participation, revision patients did not display clear benefits in quality metrics but demonstrated a considerable increase in volume and medical complexity over time. The complexity of caring for these patients may outstrip even thoughtfully implemented care pathways, especially when referral burden increases, as may be prone to happen in regional, financial risk-conferring value-based programs. Understanding the impact of mandatory bundled payment programs like CJR on the care of arthroplasty patients regionally will be essential as value-based programs evolve.
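As a simple illustration of the kind of categorical comparison reported (for example, an all-cause complication rate rising from 8.3% to 15.2%), a chi-square test on a 2x2 table can be run as below. The cohort sizes are hypothetical; only the rates are taken from the abstract.

```python
# Illustrative chi-square comparison of complication rates between the baseline
# and final CJR participation years. Denominators are hypothetical; the 8.3% and
# 15.2% rates come from the abstract.
import numpy as np
from scipy.stats import chi2_contingency

n_2015, n_2020 = 120, 170                       # hypothetical revision volumes per year
complications_2015 = round(0.083 * n_2015)
complications_2020 = round(0.152 * n_2020)

table = np.array([
    [complications_2015, n_2015 - complications_2015],
    [complications_2020, n_2020 - complications_2020],
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```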
ABSTRACT
This review provides an in-depth analysis of the effect of length of stay (LOS), comorbidities, and procedural complications on the cost-effectiveness of transcatheter aortic valve replacement (TAVR) in comparison to surgical aortic valve replacement (SAVR). We found that the average LOS was shorter for patients undergoing TAVR, contributing to lower average costs associated with the procedure, although LOS varied between patients with the severity of illness and the comorbidities present. TAVR has also been found to improve quality of life for patients receiving aortic valve replacement compared to SAVR. Although TAVR has a lower rate than SAVR of most post-operative complications, such as bleeding and cardiac complications, TAVR shows an increased rate of permanent pacemaker (PPM) implantation due to mechanical trauma to the heart's conduction system. In addition, our findings suggest that the cost-effectiveness of each procedure varies with valve type, patient history of other medical conditions, and procedural methods. Our findings show that TAVR is preferred over SAVR in terms of cost-effectiveness across a variety of patients with coexisting medical conditions, including cancer, advanced kidney disease, cirrhosis, diabetes mellitus, and bundle branch block. TAVR also appears to be superior to SAVR with fewer post-operative complications. However, TAVR appears to have a higher rate of PPM implantation compared to SAVR. The comorbidities of the valve recipient must be considered when deciding whether to use TAVR or SAVR, as cost-effectiveness varies with patient background.
ABSTRACT
This investigation explores the potential efficacy of machine learning algorithms (MLAs), particularly convolutional neural networks (CNNs), in distinguishing between benign and malignant breast tissue through the analysis of 1000 breast cancer images gathered from Kaggle.com, a domain of publicly accessible data. The dataset was partitioned into training, validation, and testing sets to facilitate model development and evaluation. Our results reveal promising outcomes, with the developed model achieving notable precision (92%), recall (92%), accuracy (92%), sensitivity (89%), specificity (96%), an F1 score of 0.92, and an area under the curve (AUC) of 0.944. These metrics underscore the model's ability to accurately identify malignant breast cancer images. Because of limitations such as sample size and potential variations in image quality, further research, data collection, and integration of such models into real-world clinical settings are needed to expand the reliability and generalizability of these MLAs. Nonetheless, this study highlights the potential use of artificial intelligence models as supporting tools for physicians in breast cancer detection.
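A compact convolutional network of the kind described can be assembled with standard deep learning tooling. The sketch below uses tf.keras with an assumed input size, architecture, and directory layout; the study's actual model configuration and preprocessing are not reported in the abstract.

```python
# Minimal CNN sketch for binary (benign vs. malignant) image classification.
# Architecture, image size, and training settings are assumptions, not the
# study's actual configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),      # predicted probability of malignancy
])

model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=["accuracy", tf.keras.metrics.AUC(name="auc")],
)
model.summary()

# Training would then use datasets built from labeled image folders, e.g.:
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "breast_images/train", image_size=(128, 128), batch_size=32)
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```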
ABSTRACT
Antiarrhythmic drugs play a pivotal role in managing and preventing arrhythmias. Amiodarone, a class III antiarrhythmic, has been used prophylactically to prevent atrial fibrillation after cardiac surgery. However, there is a lack of consensus on the use of amiodarone and other antiarrhythmic drugs as prophylaxis to reduce the occurrence of all types of postoperative arrhythmias in cardiac and non-cardiac surgeries. A comprehensive PubMed query yielded 614 relevant papers, of which 52 clinical trials were analyzed. Data collection included the class of antiarrhythmic, the timing and method of drug administration, surgery type, type of arrhythmia and its incidence, and hospitalization length. Statistical analyses focused on prophylactic antiarrhythmics and their respective reductions in postoperative arrhythmias and hospitalization length. Prophylactic amiodarone alone, compared to placebo, demonstrated a significant reduction in postoperative arrhythmia incidence in cardiac and non-cardiac surgeries (24.01%, p<0.0001), and it was the only treatment group to significantly reduce hospitalization length versus placebo (p = 0.0441). Prophylactic use of class 4 antiarrhythmics versus placebo also demonstrated a significant reduction in postoperative arrhythmia incidence (28.01%, p<0.0001), and although the difference compared to amiodarone was not statistically significant (4%, p = 0.9941), the paucity of data makes a case for further research on the prophylactic use of class 4 antiarrhythmics for this indication. Amiodarone prophylaxis remains a cornerstone of therapy for reducing postoperative arrhythmia incidence and hospitalization length. Emerging data suggest a need for broader exploration of alternative antiarrhythmic agents and combination therapies, particularly class 4 antiarrhythmics, in both cardiac and non-cardiac surgeries. This meta-analysis demonstrates the effectiveness of amiodarone, among other antiarrhythmics, in reducing postoperative arrhythmia incidence and hospitalization length in cardiac and non-cardiac surgeries.
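The headline comparison (a 24.01% absolute reduction in postoperative arrhythmia incidence with prophylactic amiodarone versus placebo) is the kind of result a two-proportion z-test produces. The pooled event counts below are hypothetical; only the direction and rough size of the effect mirror the abstract.

```python
# Sketch of a two-proportion z-test comparing postoperative arrhythmia incidence
# between prophylactic amiodarone and placebo. Counts are hypothetical stand-ins
# for the pooled trial data.
import math
from scipy import stats

events_amio, n_amio = 120, 800        # arrhythmia events / patients, amiodarone (hypothetical)
events_plac, n_plac = 310, 790        # arrhythmia events / patients, placebo (hypothetical)

p1, p2 = events_amio / n_amio, events_plac / n_plac
p_pool = (events_amio + events_plac) / (n_amio + n_plac)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_amio + 1 / n_plac))
z = (p1 - p2) / se
p_value = 2 * stats.norm.sf(abs(z))

print(f"incidence: {p1:.1%} vs {p2:.1%} "
      f"(absolute reduction {p2 - p1:.1%}), z = {z:.2f}, p = {p_value:.2e}")
```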
ABSTRACT
OBJECTIVE: Impella 5.5 (Abiomed, Danvers, MA, USA) is a temporary mechanical circulatory support device used for patients in cardiogenic shock. This review provides a comprehensive overview of the device's clinical effectiveness, safety profile, patient outcomes, and relevant procedural considerations. METHODS: We conducted a systematic review according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines using the PubMed/MEDLINE database. The search query included articles available from October 6, 2022, through January 13, 2023. Our initial search identified 75 studies. All records were screened by 2 independent reviewers using the Covidence software for adherence to our inclusion criteria, and 8 retrospective cohort studies were identified as appropriate for inclusion. RESULTS: Across the included studies, the sample size ranged from 4 to 275, with predominantly male cohorts. Indications for Impella support varied, and the duration of support ranged from 9.8 to 70 days. Overall, Impella support appeared to be associated with favorable survival rates and manageable complications in various patient populations. Complications associated with Impella use included bleeding, stroke, and device malfunctions. Two studies compared prolonged and Food and Drug Administration-approved durations of Impella support, showing similar outcomes and adverse events. CONCLUSIONS: Impella 5.5 continues to be an attractive option for bridging patients to definitive therapy. Survival during and after Impella 5.5 support was favorable for patients regardless of initial indication. However, device use was associated with several important complications, which calls for judicious use and a precontemplated exit strategy. Limitations of this literature review include biases inherent to the retrospective studies included, such as selection and publication bias.
Subjects
Heart-Assist Devices, Cardiogenic Shock, Humans, Cardiogenic Shock/therapy, Treatment Outcome, Male, Female
ABSTRACT
Aim: In this study, we present a broad overview of the current state of cerebral vasospasm, including its pathogenesis, commonly used treatments, and future outlook. Methods: A literature review was conducted for cerebral vasospasm using the PubMed journal database (https://pubmed.ncbi.nlm.nih.gov). Relevant journal articles were narrowed down and selected using the Medical Subject Headings (MeSH) option in PubMed. Results: Cerebral vasospasm is the persistent narrowing of cerebral arteries days after a subarachnoid hemorrhage (SAH). If not corrected, it can eventually lead to cerebral ischemia with significant neurological deficits and/or death. Therefore, it is clinically beneficial to diminish or prevent the occurrence or recurrence of vasospasm in patients following a SAH to prevent unwanted comorbidities or fatalities. We discuss the pathogenesis and mechanisms of development implicated in the progression of vasospasm, as well as the manner in which clinical outcomes are quantitatively measured. Further, we highlight commonly used treatments to inhibit and reverse the course of vasoconstriction within the cerebral arteries. Additionally, we mention innovations and techniques that are being used to treat vasospasm and the outlook for their therapeutic value. Conclusion: Overall, we give a comprehensive summary of cerebral vasospasm and the current and future standards of care used to treat it.
ABSTRACT
Osteosarcoma (OS) is a debilitating cancer of the bone that commonly afflicts the young and the old. It may arise de novo or in association with tumorigenic syndromes. However, many molecular mechanisms are still being uncovered and may offer greater avenues for screening and therapy. Cadherins, including E-cadherin and N-cadherin/vimentin, are involved in the epithelial-to-mesenchymal transition (EMT), which is key for tumor invasion. A review of the relationship between OS and cadherins might elucidate a potential target for therapy and screening. A literature review was conducted by searching PubMed with the keywords "osteosarcoma", "cadherin", "e-cadherin" and "n-cadherin". Of a preliminary 266 papers, 25 were included in the final review. Review articles and those without primary data were excluded. Loss of E-cadherin is noted in metastatic osteosarcoma cell lines. Overexpression of E-cadherin or knockout of N-cadherin/vimentin results in loss of metastatic potential. There are several methods of gene knockout, including CRISPR-Cas9 gene editing, viral vector insertion of microRNA complementary to long noncoding RNA within gene segments, and proteomic editing. Screening for EMT and genetic treatment of EMT are possible avenues for the treatment of refractory osteosarcoma. Several of the included studies were conducted ex vivo; further in vivo testing is necessary to validate these methods. Limitations of this review include the lack of in vivo trials available to validate the reported methods.
ABSTRACT
Background: Traumatic brain injuries (TBIs) are associated with high mortality and morbidity. Depressed skull fractures (DSFs) are a subset of fractures characterized by either direct or indirect brain damage caused by compression of brain tissue. Recent advances in implant use during primary reconstruction surgeries have proven effective. In this systematic review, we assess differences among titanium mesh, polyetheretherketone (PEEK) implants, autologous pericranial grafts, and polymethyl methacrylate (PMMA) implants for DSF treatment. Methods: A literature search was conducted in PubMed, Scopus, and Web of Science from their inception to September 2022 to retrieve articles regarding the use of various implant materials for depressed skull fractures. Inclusion criteria were studies specifically describing the implant type/material used in the treatment of depressed skull fractures, particularly during duraplasty. Exclusion criteria were studies reporting only non-primary data, those insufficiently disaggregated to extract implant type, those describing treatment of pathologies other than depressed skull fractures, and non-English or cadaveric studies. The Newcastle-Ottawa Scale was used to assess for the presence of bias in the included studies. Results: Following final study selection, 18 articles were included for quantitative and qualitative analysis. Of the 177 patients (152 males), the mean age was 30.8 years, with 82% implanted with autologous graft material and 18% with non-autologous material. Data were pooled and analyzed with respect to the total patient set and additionally stratified into those treated with autologous and non-autologous implant material. There were no differences between the two cohorts in mean time to encounter, pre-operative Glasgow Coma Scale (GCS) score, fracture location, length to cranioplasty, or complication rate. There were statistically significant differences in post-operative GCS (p < 0.0001), length of stay (p = 0.0274), and minimum follow-up time (p = 0.000796). Conclusion: Differences in measurable post-operative outcomes between implant groups were largely minimal or absent. Future research should probe these basic results more deeply with a larger, non-biased sample.
ABSTRACT
Clinicians have managed and treated lower back pain since the earliest days of practice. Historically, lower back pain and its accompanying symptoms of radiating leg pain and muscle weakness have been attributed to any of the various lumbar spine pathologies that compress the lumbar nerve roots, the most common of which is the radiculopathy known as sciatica. More recently, however, with the rise in chronic diseases, the importance of differentially diagnosing a similarly presenting pathology, known as lumbosacral plexopathy, cannot be overstated. Given the similar clinical presentation of lumbar spine pathologies and lumbosacral plexopathies, it can be difficult to differentiate these two diagnoses in the clinical setting. As a result, the inappropriate diagnosis of either pathology can result in ineffective clinical management. Thus, this review aims to aid in the clinical differentiation between lumbar spine pathology and lumbosacral plexopathy. Specifically, this paper delves into spine and plexus anatomy, delineates the clinical assessment of both pathologies, and highlights powerful diagnostic tools in the hope of bolstering appropriate diagnosis and treatment. Lastly, this review describes emerging treatment options for both pathologies in the preclinical and clinical realms, with a special emphasis on regenerative nerve therapies.
ABSTRACT
Background Tuberculosis (TB) is an infectious disease caused by the bacterium Mycobacterium tuberculosis. It primarily affects the lungs but can also affect other organs, such as the kidneys, bones, and brain. TB is transmitted through the air when an infected individual coughs, sneezes, or speaks, releasing tiny droplets containing the bacteria. Despite significant efforts to combat TB, challenges such as drug resistance, co-infection with human immunodeficiency virus (HIV), and limited resources in high-burden settings continue to pose obstacles to its eradication. TB remains a significant global health challenge, necessitating accurate and timely detection for effective management. Methods This study aimed to develop a TB detection model using chest X-ray images obtained from Kaggle.com, built on the Google Colaboratory platform. Over 1,196 chest X-ray images, comprising both TB-positive and normal cases, were employed for model development. The model was trained to recognize patterns in chest X-rays so that TB can be identified efficiently and patients treated in time. Results The model achieved an average precision of 0.934, with precision and recall values of 94.1% each, indicating its high accuracy in classifying TB-positive and normal cases. Sensitivity and specificity were calculated as 96.85% and 91.49%, respectively. The F1 score was calculated to be 0.941. The overall accuracy of the model was 94%. Conclusion These results highlight the potential of machine learning algorithms for TB detection using chest X-ray images. Further validation studies and research efforts are needed to assess the model's generalizability and integration into clinical practice, ultimately facilitating early detection and improved management of TB.
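The average precision reported here is typically computed from the precision-recall curve of the model's predicted probabilities. The sketch below shows how that value, along with sensitivity and specificity at a fixed threshold, can be obtained with scikit-learn on hypothetical scores; it does not use the study's actual model outputs.

```python
# Sketch of computing average precision and threshold-based metrics for a binary
# TB-vs-normal classifier. Labels and predicted probabilities are hypothetical.
import numpy as np
from sklearn.metrics import average_precision_score, confusion_matrix

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=300)                               # 1 = TB-positive
y_score = np.clip(y_true * 0.7 + rng.normal(0.3, 0.2, 300), 0, 1)   # hypothetical model scores

ap = average_precision_score(y_true, y_score)

y_pred = (y_score >= 0.5).astype(int)                               # fixed decision threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

print(f"average precision = {ap:.3f}, "
      f"sensitivity = {sensitivity:.3f}, specificity = {specificity:.3f}")
```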
ABSTRACT
Procedures for neurological disorders such as Parkinson's disease (PD), essential tremor (ET), obsessive-compulsive disorder (OCD), Tourette's syndrome (TS), and major depressive disorder (MDD) tend to overlap. Common therapeutic procedures include deep brain stimulation (DBS), lesioning, and focused ultrasound (FUS). There has been significant change and innovation in targeting mechanisms, and new advancements in this field are allowing for better clinical outcomes in patients with severe cases of these conditions. In this review, we discuss advancements and recent discoveries regarding these three procedures and how they have led to changes in their utilization for certain conditions. We further discuss the advantages and drawbacks of these treatments in certain conditions and the emerging advancements in brain-computer interfaces (BCIs) and their utility as therapeutics for neurological disorders.
ABSTRACT
Introduction Pancreatic cancer resections comprise a class of complex surgical operations with a high postoperative morbidity rate. Due to the complicated nature of pancreatic resection, individuals who undergo this procedure are advised to visit a high-volume medical center that performs such pancreatic surgeries frequently. However, this specialized treatment option may not be available to uninsured patients or patients with other socioeconomic limitations that restrict their access to these facilities. To gain a better understanding of the impact of healthcare disparities on surgical outcomes, we aimed to explore whether there were significant differences in mortality rate after pancreatic resection between high- and low-volume hospitals within San Bernardino, Riverside, Los Angeles, and Orange Counties. Methods We utilized the California Health and Human Services Agency (CHHS) California Hospital Inpatient Mortality Rates and Quality Ratings public dataset to compare risk-adjusted mortality rates (RA-MR) of pancreatic cancer resection procedures. We focused on procedures performed in hospitals within San Bernardino, Riverside, Los Angeles, and Orange Counties from 2012 to 2015. To assess post-resection outcomes in relation to hospital volume, we used an independent t-test (significance level set at 0.05) to determine whether there was a statistically significant difference in RA-MR after pancreatic resection between high- and low-volume hospitals. Results During the 2012-2015 study period, 57 hospitals across San Bernardino, Riverside, Orange, and Los Angeles Counties were identified as performing a total of 6,204 pancreatic resection procedures. The low-volume hospital group (N=2,539 procedures) had a higher RA-MR of M=4.45 (SD=11.86). By comparison, the high-volume hospital group (N=3,665 procedures) had a lower RA-MR of M=1.72 (SD=2.61). Conclusion Pancreatic resection surgeries performed at low-volume hospitals resulted in a significantly higher RA-MR compared to procedures done at high-volume hospitals in California.
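Because the abstract reports group means, standard deviations, and sizes, the independent t-test it describes can be reproduced directly from those summary statistics. The sketch below uses scipy's ttest_ind_from_stats with the reported values; whether the original analysis assumed equal variances is not stated, so Welch's correction is shown as one reasonable choice.

```python
# Reproducing the reported group comparison from summary statistics.
# Means, SDs, and Ns are taken from the abstract; the equal_var choice
# (Welch's t-test here) is an assumption.
from scipy.stats import ttest_ind_from_stats

# Low-volume hospitals:  M = 4.45, SD = 11.86, N = 2,539
# High-volume hospitals: M = 1.72, SD = 2.61,  N = 3,665
t_stat, p_value = ttest_ind_from_stats(
    mean1=4.45, std1=11.86, nobs1=2539,
    mean2=1.72, std2=2.61, nobs2=3665,
    equal_var=False,            # Welch's t-test (assumption)
)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
```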
ABSTRACT
This study explores the application of machine learning and deep learning algorithms to facilitate the accurate diagnosis of melanoma, a type of malignant skin cancer, and benign nevi. Leveraging a dataset of 793 dermatological images from the Kaggle online platform (Google LLC, Mountain View, California, United States), we developed a model that can accurately differentiate between these lesions based on their distinctive features. The dataset was divided into training (80%), validation (10%), and testing (10%) sets to optimize model performance and ensure its generalizability. Our findings demonstrate the potential of machine learning algorithms in enhancing the efficiency and accuracy of melanoma and nevi detection, with the developed model exhibiting robust performance metrics. Nonetheless, limitations exist due to the potential lack of comprehensive representation of melanoma and nevi cases in the dataset, and variations in image quality and acquisition methods, which may influence the model's performance in real-world clinical settings. Therefore, further research, validation studies, and integration into clinical practice are necessary to ensure the reliability and generalizability of these models. This study underscores the promise of artificial intelligence in advancing dermatologic diagnostics, aiming to improve patient outcomes by supporting early detection and treatment initiation for melanoma.
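The 80/10/10 partitioning described above can be obtained with two successive stratified splits, as in the sketch below. File paths and labels are placeholders; the actual Kaggle dataset layout is not assumed.

```python
# Sketch of an 80/10/10 train/validation/test split with stratification by label
# (melanoma vs. nevus). Filenames and labels here are placeholders.
from sklearn.model_selection import train_test_split

image_paths = [f"img_{i:04d}.jpg" for i in range(793)]          # placeholder filenames
labels = [i % 2 for i in range(793)]                            # placeholder labels

# First split off 20% of the data, then divide that portion half-and-half
# into validation and test sets (roughly 10% each of the full dataset).
train_x, temp_x, train_y, temp_y = train_test_split(
    image_paths, labels, test_size=0.20, stratify=labels, random_state=0)
val_x, test_x, val_y, test_y = train_test_split(
    temp_x, temp_y, test_size=0.50, stratify=temp_y, random_state=0)

print(len(train_x), len(val_x), len(test_x))                    # approximately 634 / 79 / 80
```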
ABSTRACT
Age-related macular degeneration (AMD) is a disease that worsens the central vision of numerous individuals across the globe. Ensuring that patients are diagnosed accurately and that their symptoms are carefully monitored is essential to delivering adequate care. To accomplish this objective, retinal imaging technology is needed to assess the pathophysiologic changes required for an accurate diagnosis of AMD. The purpose of this review is to assess the ability of various retinal imaging technologies, such as optical coherence tomography (OCT), color fundus retinal photography, fluorescein angiography, and fundus photography, to detect and monitor AMD. The statistical analyses reviewed suggest that using OCT in conjunction with other imaging technologies results in higher detection of symptoms among patients who have AMD. Further investigation should be conducted to ascertain the validity of the conclusions stated within this review.
ABSTRACT
INTRODUCTION: Medical students applying to neurosurgery residency programs incur substantial costs associated with interviews, away rotations, and application fees. However, few studies have compared expenses before and during the COVID-19 pandemic. This study evaluates the financial impact of COVID-19 on the neurosurgery residency application and identifies strategies that may alleviate the financial burden on prospective neurosurgery residents. METHODS: The TEXAS STAR database was surveyed for applicants to neurosurgical residency programs during the COVID-19 pandemic (2021) and post-pandemic (2022). Sixty-six applicants from the 2021 application cycle and 50 applicants from the 2022 application cycle completed the survey. We compared application fees, away rotation costs, interview costs, and total expenses as reported by the neurosurgery applicants of the 2021 and 2022 application cycles. A Shapiro-Wilk test was used to assess data normality, and a Mann-Whitney U test was used to compare costs between the 2021 and 2022 neurosurgery application cycles. RESULTS: There was a statistically significant reduction in total expenses in 2021 vs 2022 ($3,934 vs $9,860). Interview and away rotation expenses were lower in 2021 than in 2022 (interview expenses $786 vs $4,511; away rotation $1,083 vs $3,000; p < 0.001). Application fee expenses did not differ between 2021 and 2022. The greatest reduction in application cost ($11,908) was seen in the South in 2021. CONCLUSIONS: The COVID-19 pandemic significantly reduced the total fees associated with the neurosurgical residency application. Virtual platforms in place of in-person interviews could lessen the financial burden on applicants and alleviate socioeconomic barriers in the neurosurgical application process after COVID-19.
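The normality check and nonparametric comparison described in the methods map onto scipy directly: a Shapiro-Wilk test on each year's cost distribution, followed by a Mann-Whitney U test between years. The cost arrays below are hypothetical; only the workflow and sample sizes mirror the abstract.

```python
# Sketch of the reported workflow: test each cycle's cost distribution for
# normality (Shapiro-Wilk), then compare cycles with a Mann-Whitney U test.
# The simulated costs are hypothetical stand-ins for survey responses.
import numpy as np
from scipy.stats import shapiro, mannwhitneyu

rng = np.random.default_rng(1)
costs_2021 = rng.lognormal(mean=np.log(3900), sigma=0.6, size=66)   # hypothetical, n = 66
costs_2022 = rng.lognormal(mean=np.log(9900), sigma=0.6, size=50)   # hypothetical, n = 50

for year, costs in (("2021", costs_2021), ("2022", costs_2022)):
    w, p_norm = shapiro(costs)
    print(f"{year}: Shapiro-Wilk W = {w:.3f}, p = {p_norm:.3f}")

u_stat, p_value = mannwhitneyu(costs_2021, costs_2022, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_value:.2e}")
```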
Subjects
COVID-19, Internship and Residency, Medical Students, Humans, Pandemics, Prospective Studies, Costs and Cost Analysis, COVID-19/epidemiology
ABSTRACT
Cervical spine fractures represent a significant healthcare challenge, necessitating accurate detection for appropriate management and improved patient outcomes. This study aims to develop a machine learning-based model utilizing a computed tomography (CT) image dataset to detect and classify cervical spine fractures. Leveraging a large dataset of 4,050 CT images obtained from the Radiological Society of North America (RSNA) Cervical Spine Fracture dataset, we evaluate the potential of machine learning and deep learning algorithms to achieve accurate and reliable cervical spine fracture detection. The model demonstrates outstanding performance, achieving an average precision of 1.0 and precision, recall, sensitivity, specificity, and accuracy values of 100%. These exceptional results highlight the potential of machine learning algorithms to enhance clinical decision-making and facilitate prompt treatment initiation for cervical spine fractures. However, further research and validation efforts are warranted to assess the model's generalizability across diverse populations and real-world clinical settings, ultimately contributing to improved patient outcomes in cervical spine fracture cases.
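Preparing CT slices for a fracture-detection model typically involves reading the pixel data, applying the rescale slope and intercept to obtain Hounsfield units, windowing to a bone range, and resizing to the model input size. The sketch below is a generic preprocessing example using pydicom with assumed window settings and a placeholder file path; it is not the study's actual pipeline, and it assumes the CT data are stored as DICOM files.

```python
# Generic sketch of CT slice preprocessing for a fracture classifier.
# The file path, window settings, and target size are assumptions; the study's
# actual preprocessing is not described in the abstract.
import numpy as np
import pydicom

def load_bone_windowed_slice(path, center=400, width=1800, size=224):
    ds = pydicom.dcmread(path)
    hu = ds.pixel_array.astype(np.float32)
    # Convert stored values to Hounsfield units using the DICOM rescale tags
    hu = hu * float(getattr(ds, "RescaleSlope", 1)) + float(getattr(ds, "RescaleIntercept", 0))
    # Apply a bone window, then scale to [0, 1]
    low, high = center - width / 2, center + width / 2
    windowed = np.clip(hu, low, high)
    windowed = (windowed - low) / (high - low)
    # Nearest-neighbor resize to a square model input (no external dependencies)
    rows = np.linspace(0, windowed.shape[0] - 1, size).astype(int)
    cols = np.linspace(0, windowed.shape[1] - 1, size).astype(int)
    return windowed[np.ix_(rows, cols)]

# Example usage with a placeholder path:
# slice_img = load_bone_windowed_slice("path/to/slice.dcm")
```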