ABSTRACT
Colorectal cancer (CRC) is the third leading cause of cancer incidence and the second leading cause of cancer death worldwide. The onset of CRC remains poorly understood, hindering the development of effective prevention strategies, early detection methods and the selection of appropriate therapies. This article outlines what is currently known about the host genetics underlying the origin and development of CRC. It describes the organisation of the colonic crypts, discusses how a normal cell transforms into a cancer cell, and explains how that malignant cell can populate an entire colonic crypt, promoting colorectal carcinogenesis. Current knowledge about the cell of origin of CRC is discussed, and the two morphological pathways that can give rise to CRC, the classical and the alternative pathway, are presented. Owing to the molecular heterogeneity of CRC, each of these pathways has been associated with different molecular mechanisms, including chromosomal instability, microsatellite instability and the CpG island methylator phenotype. Finally, different CRC classification systems based on genetic, epigenetic and transcriptomic alterations are described, enabling personalised diagnosis and treatment.
ABSTRACT
PURPOSE: This study aims to (1) devise a classification system to categorize and manage ballistic fractures of the knee, hip, and shoulder; (2) assess the reliability of this classification compared to current classification schemas; and (3) determine the association of this classification with surgical management. METHODS: We performed a retrospective review of a prospectively collected trauma database at an urban level 1 trauma centre. The study included 147 patients with 169 articular fractures caused by ballistic trauma to the knee, hip, and shoulder. Injuries were selected based on radiographic criteria from plain radiographs and CT scans. The AO/OTA classification system's reliability was compared to that of the novel ballistic articular injury classification system (BASIC), developed using a nominal group approach. The BASIC system's ability to guide surgical decision-making, aiming to achieve stable fixation and minimize post-traumatic arthritis, was also evaluated. RESULTS: The BASIC system was created after analysing 73 knee, 62 hip, and 34 shoulder fractures. CT scans were used in 88% of cases, with 44% of patients receiving surgery. The BASIC classification comprises five subgroups, with a plus sign indicating the need for soft tissue intervention. Interrater reliability showed fair agreement for AO/OTA (k = 0.373) and moderate agreement for BASIC (k = 0.444). The BASIC system correlated strongly with surgical decisions, with an 83% concurrence in treatment choices based on chart reviews. CONCLUSIONS: Conventional classification systems provide limited guidance for ballistic articular injuries. The BASIC system offers a pragmatic and reproducible alternative, with potential to inform treatment decisions for knee, hip, and shoulder ballistic injuries. Further research is needed to validate this system and its correlation with patient outcomes. LEVEL OF EVIDENCE: Level III, Diagnostic Study.
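Interrater agreement figures such as the kappa values above are chance-corrected proportions of agreement. A minimal sketch of Cohen's kappa for two raters follows; the four-item ratings are hypothetical, not data from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over nominal categories."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # expected agreement by chance, from each rater's marginal frequencies
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Two raters classifying four fractures into hypothetical types "A"/"B"
print(cohens_kappa(["A", "A", "B", "B"], ["A", "A", "B", "A"]))  # → 0.5
```

On this scale, the reported 0.373 (AO/OTA) and 0.444 (BASIC) fall in the conventional "fair" and "moderate" agreement bands.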
ABSTRACT
AIM: To describe the activity of hospital emergency departments (EDs) and the sociodemographic profile of patients in the eight public hospitals of the Catalan Institute of Health in Spain according to triage level, and to analyse the impact of the SARS-CoV-2 pandemic on patient flow. DESIGN: An observational, descriptive, cross-sectional and retrospective study was carried out. METHODS: Three high-tech public hospitals and five low-tech hospitals consecutively included 2,332,654 adult patients seen in hospital EDs from January 2018 to December 2021. The main variable was triage level, classified according to the Spanish structured triage standard known as the Sistema Español de Triaje. For each of the five triage levels, a negative binomial regression model adjusted for year and hospital was fitted. The analysis was performed with R 4.2.2 software. RESULTS: The mean age was 55.4 years, and 51.4% of patients were women. The distribution of patients across the five triage levels was: level 1, 0.41% (n = 9565); level 2, 6.10% (n = 142,187); level 3, 40.2% (n = 938,203); level 4, 42.6% (n = 994,281); and level 5, 10.6% (n = 248,418). The sociodemographic profile was similar in terms of gender and age: as the level of severity decreased, the proportion of women, mostly young, increased. In the period 2020-2021, the emergency rate increased for levels 1, 2 and 3, while levels 4 and 5 remained stable. CONCLUSION: More than half of the patients attended in high-technology hospital EDs were of low severity. These patients were typically young to middle-aged and mostly female. The SARS-CoV-2 pandemic did not change this pattern, although an increase in the level of severity was observed. IMPACT: What problem did the study address? Overcrowding in hospital EDs. What were the main findings? 
This study found that more than half of the patients attended in high-technology hospital EDs in Spain had low or very low levels of severity. Young to middle-aged women were more likely to visit EDs with low levels of severity. The SARS-CoV-2 pandemic did not change this pattern, although an increase in severity was observed. Where and on whom will the research have an impact? The research will have an impact on the functioning of hospital EDs and their staff. PATIENT OR PUBLIC CONTRIBUTION: Not applicable.
ABSTRACT
INTRODUCTION: To develop and validate a support tool for healthcare providers, enabling them to make precise and critical decisions regarding intensive care unit (ICU) admissions for high-risk pregnant women, thus enhancing maternal outcomes. METHODS: This retrospective study involves secondary analysis of data from 9550 pregnant women with severe maternal morbidity (any unexpected complication during labor and delivery that leads to substantial short-term or long-term health issues for the mother), collected between 2009 and 2010 by the Brazilian Network for Surveillance of Severe Maternal Morbidity, encompassing 27 obstetric reference centers in Brazil. Machine-learning models, including decision trees, Random Forest, Gradient Boosting Machine (GBM), and Extreme Gradient Boosting (XGBoost), were employed to create a risk prediction tool for ICU admission. Subsequently, sensitivity analysis was conducted to compare the accuracy, predictive power, sensitivity, and specificity of these models, with differences analyzed using the Wilcoxon test. RESULTS: The XGBoost algorithm demonstrated superior efficiency, achieving an accuracy rate of 85%, sensitivity of 42%, specificity of 97%, and an area under the receiver operating characteristic curve of 86.7%. Notably, the prevalence of ICU utilization estimated by the model (11.6%) differed from the observed prevalence of ICU use in the study (21.52%). CONCLUSION: The developed risk engine yielded positive results, emphasizing the need to optimize intensive care bed utilization and objectively identify high-risk pregnant women requiring these services. This approach promises to enhance the effective and efficient management of pregnant women, particularly in resource-constrained regions worldwide. By streamlining ICU admissions for high-risk cases, healthcare providers can better allocate critical resources, ultimately contributing to improved maternal health outcomes.
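All of the performance figures above derive from a 2x2 confusion matrix. The sketch below shows that arithmetic; the counts are hypothetical, chosen only to approximate the reported values (85% accuracy, 42% sensitivity, 97% specificity, 11.6% predicted vs 21.5% observed prevalence):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard screening metrics from a 2x2 confusion matrix."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),              # recall of ICU-admitted cases
        "specificity": tn / (tn + fp),              # correct rejection of non-ICU cases
        "predicted_prevalence": (tp + fp) / total,  # model-estimated ICU use
        "observed_prevalence": (tp + fn) / total,   # actual ICU use
    }

# Hypothetical counts on a 1000-patient set
m = binary_metrics(tp=90, fp=26, tn=759, fn=125)
print(round(m["accuracy"], 2), round(m["sensitivity"], 2), round(m["specificity"], 2))  # → 0.85 0.42 0.97
```

The gap between predicted (11.6%) and observed (21.5%) prevalence is what a highly specific but only moderately sensitive model produces: it flags few false positives but misses many true ICU admissions.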
ABSTRACT
The Banff 2022 consensus introduced probable antibody-mediated rejection (AMR), characterized by mild AMR histologic features and human leukocyte antigen (HLA) donor-specific antibody (DSA) positivity. In a single-center observational cohort study of 1891 kidney transplant recipients transplanted between 2004 and 2021, 566 kidney biopsies were performed in 178 individual HLA-DSA-positive transplants. At the time of the first HLA-DSA-positive biopsy of each transplant (N = 178), 84 of the 178 first biopsies (47.2%) were scored as no AMR, 22 of 178 (12.4%) as probable AMR, and 72 of 178 (40.4%) as AMR. The majority (77.3%) of probable AMR cases were first diagnosed in indication biopsies. Probable AMR was associated with a lower estimated glomerular filtration rate (mL/min/1.73 m2) than no AMR (20.2 [8.3-32.3] vs 40.1 [25.4-53.3]; P = .001). The one-year risk of (repeat) AMR was similar for probable AMR and AMR (subdistribution hazard ratio [sHR], 0.99; 0.42-2.31; P = .97) and higher than after no AMR (sHR, 3.05; 1.07-8.73; P = .04). Probable AMR had a higher five-year risk of transplant glomerulopathy than no AMR (sHR, 4.29; 0.92-19.98; P = .06), similar to AMR (sHR, 1.74; 0.43-7.04; P = .44). No significant differences in five-year risk of graft failure emerged between probable AMR and AMR (sHR, 1.14; 0.36-3.58; P = .82) or no AMR (sHR, 2.46; 0.78-7.74; P = .12). Probable AMR is a rare phenotype, but in this single-center study it shared significant similarities with AMR. Future studies are needed to validate reproducible diagnostic criteria and associated clinical outcomes to define the best management of this potentially relevant phenotype.
ABSTRACT
BACKGROUND: Various risk classification systems (RCSs) are used globally to stratify newly diagnosed patients with prostate cancer (PCa) into prognostic groups. OBJECTIVE: To compare the predictive value of different prognostic subgroups (low-, intermediate-, and high-risk disease) within the RCSs for detecting metastatic disease on prostate-specific membrane antigen (PSMA) positron emission tomography (PET)/computed tomography (CT) for primary staging, and to assess whether further subdivision of subgroups would be beneficial. DESIGN, SETTING, AND PARTICIPANTS: Patients with newly diagnosed PCa, in whom PSMA-PET/CT was performed between 2017 and 2022, were studied retrospectively. Patients were stratified into risk groups based on four RCSs: European Association of Urology, National Comprehensive Cancer Network (NCCN), Cambridge Prognostic Group (CPG), and Cancer of the Prostate Risk Assessment. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: The prevalence of metastatic disease on PSMA-PET/CT was compared among the subgroups within the four RCSs. RESULTS AND LIMITATIONS: In total, 2630 men with newly diagnosed PCa were studied. Any metastatic disease was observed in 35% (931/2630) of patients. Among patients classified as having intermediate- and high-risk disease, the prevalence of metastases ranged from approximately 12% to 46%. Two RCSs further subdivided these groups. According to the NCCN, metastatic disease was observed in 5.8%, 13%, 22%, and 62% for favorable intermediate-, unfavorable intermediate-, high-, and very-high-risk PCa, respectively. Regarding the CPG, these values were 6.9%, 13%, 21%, and 60% for the corresponding risk groups. CONCLUSIONS: This study underlines the importance of nuanced risk stratification, recommending the further subdivision of intermediate- and high-risk disease given the notable variation in the prevalence of metastatic disease. 
PSMA-PET/CT for primary staging should be reserved for patients with unfavorable intermediate- or higher-risk disease. PATIENT SUMMARY: The use of various risk classification systems in patients with prostate cancer helps identify those at a higher risk of having metastatic disease on prostate-specific membrane antigen positron emission tomography/computed tomography for primary staging.
ABSTRACT
Major depressive disorder (MDD) is a heterogeneous syndrome associated with different levels of severity and impairment in personal functioning for each patient. Classification systems in psychiatry, including the ICD-11 and DSM-5, are used by clinicians to simplify the complexity of clinical manifestations. In particular, the DSM-5 introduced specifiers, subtypes, severity ratings, and cross-cutting symptom assessments, allowing clinicians to better describe the specific clinical features of each patient. However, the use of the DSM-5 specifiers for major depressive disorder in ordinary clinical practice is quite heterogeneous. The present study used a Delphi method to evaluate the consensus of a representative group of expert psychiatrists on a series of statements regarding the clinical utility and relevance of the DSM-5 specifiers for major depressive disorder in ordinary clinical practice. The experts reached almost perfect agreement on statements related to the use and clinical utility of the specifiers, including complete consensus on their clinical utility in ordinary practice. The use of specifiers is considered a first step toward a "dimensional" approach to the diagnosis of mental disorders.
ABSTRACT
Over the past quarter-century, the field of evolutionary biology has been transformed by the emergence of complete genome sequences and the conceptual framework known as the 'Net of Life.' This paradigm shift challenges traditional notions of evolution as a tree-like process, emphasizing the complex, interconnected network of gene flow that may blur the boundaries between distinct lineages. In this context, gene loss, rather than horizontal gene transfer, is the primary driver of gene content, with vertical inheritance playing a principal role. The 'Net of Life' not only impacts our understanding of genome evolution but also has profound implications for classification systems, the rapid appearance of new traits, and the spread of diseases. Here, we explore the core tenets of the 'Net of Life' and its implications for genome-scale phylogenetic divergence, providing a comprehensive framework for further investigations in evolutionary biology.
ABSTRACT
INTRODUCTION: Knowledge of the morphology of the suprascapular notch is clinically beneficial in patients with suspected suprascapular nerve compression or palsy. Several classification systems have been proposed for the morphological classification of the suprascapular notch and its anatomical variations. The purpose of this study was to evaluate the inter- and intraobserver reliability of four different classification systems for suprascapular notch typing by analysing shoulder computed tomography (CT) scans. METHODS: Shoulder CT scans from 109 subjects (71.5% male) were examined by three raters of different experience levels: one senior, one experienced, and one junior orthopaedic surgeon. The CT scans were evaluated quantitatively and qualitatively, and the suprascapular notch was classified according to the four classification systems at two separate timepoints, four weeks apart. Intrarater reliability for each rater between the first and second evaluations was assessed using Cohen's kappa, and reliability across all raters at each timepoint was assessed using Fleiss' kappa. RESULTS: Agreement was almost perfect for all the classification systems and among all raters, regardless of experience level. There were no significant differences between the raters on any of the evaluations. The overall interobserver agreement for all classifications was almost perfect. CONCLUSION: The four suprascapular notch classification systems are reliable, and the rater's experience level has no impact on the evaluation.
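Fleiss' kappa, used above for across-rater reliability, generalizes chance-corrected agreement to a fixed number of raters per item. A minimal sketch follows; the scan ratings and notch-type labels are hypothetical:

```python
from collections import Counter

def fleiss_kappa(ratings, categories):
    """Fleiss' kappa: `ratings` holds one tuple of category labels per item,
    with the same number of raters for every item (here: three per CT scan)."""
    n_items = len(ratings)
    n_raters = len(ratings[0])
    p_bar = 0.0               # mean per-item agreement
    cat_counts = Counter()    # marginal category tallies
    for item in ratings:
        counts = Counter(item)
        cat_counts.update(counts)
        p_i = (sum(c * c for c in counts.values()) - n_raters) / (n_raters * (n_raters - 1))
        p_bar += p_i / n_items
    # chance agreement from marginal category proportions
    total = n_items * n_raters
    p_e = sum((cat_counts[c] / total) ** 2 for c in categories)
    return (p_bar - p_e) / (1 - p_e)

# Three raters assigning hypothetical notch types to three scans
scans = [("I", "I", "I"), ("II", "II", "II"), ("I", "I", "I")]
print(fleiss_kappa(scans, ["I", "II", "III", "IV"]))  # perfect agreement → kappa ≈ 1.0
```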
ABSTRACT
Categorization is ubiquitous in human cognition and society, and shapes how we perceive and understand the world. Because categories reflect the needs and perspectives of their creators, no category system is entirely objective, and inbuilt biases can have harmful social consequences. Here we propose methods for measuring biases in hierarchical systems of categories, a common form of category organization with multiple levels of abstraction. We illustrate these methods by quantifying the extent to which library classification systems are biased in favour of western concepts and male authors. We analyze a large library data set including more than 3 million books organized into thousands of categories, and find that categories related to religion show greater western bias than do categories related to literature or history, and that books written by men are distributed more broadly across library classification systems than are books written by women. We also find that the Dewey Decimal Classification shows a greater level of bias than does the Library of Congress Classification. Although we focus on library classification as a case study, our methods are general, and can be used to measure biases in both natural and institutional category systems across a range of domains.
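The abstract does not specify how "distributed more broadly across categories" is quantified. One plausible measure, offered here purely as an illustrative assumption, is the Shannon entropy of a group's book counts over categories: uniform spread maximizes it, concentration minimizes it. The counts below are hypothetical:

```python
from math import log2

def breadth(counts):
    """Shannon entropy of a distribution over categories: higher values mean
    the books are spread more broadly across the classification system."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * log2(p) for p in probs)

# Hypothetical: one group's books spread across four top-level classes
print(breadth([25, 25, 25, 25]))  # → 2.0  (maximally broad for 4 classes)
print(breadth([97, 1, 1, 1]))     # concentrated → much lower
```

Comparing such breadth scores between groups (e.g., books by men vs women) is one simple way to operationalize the kind of distributional bias the study describes.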
ABSTRACT
OBJECTIVES: The aim of this study was to determine the daily nursing care times of hospitalized inpatient oncology unit patients according to degree of acuity, using the Perroca Patient Classification Instrument. DATA SOURCES: This study used a mixed-methods sequential explanatory design. The "Nursing Activity Record Form" and the "Perroca Patient Classification Instrument" were used for quantitative data collection, and direct observation was performed for 175 hours via a time-motion study. Descriptive statistics, between-group comparisons, and correlation analysis were used for data analysis. Using a semistructured questionnaire, qualitative data were collected through individual in-depth interviews with seven nurses who participated in the quantitative part of the study. Qualitative data were analyzed by thematic analysis. The reporting of this study followed the GRAMMS checklist. CONCLUSIONS: Integrating the quantitative and qualitative data, daily nursing care duration was 2 to 2.5 hours for Type 1 patients, 2.6 to 3.5 hours for Type 2 patients, 3.6 to 4.75 hours for Type 3 patients, and 4.76 to 5.5 hours for Type 4 patients. The findings showed that in an inpatient oncology unit, nursing care hours increased as patients' Perroca Patient Classification Instrument acuity grade increased; thus, the instrument was discriminative in determining patients' degree of acuity. IMPLICATIONS FOR NURSING PRACTICE: Nurse managers can use these results to plan daily assignments that are sensitive to patient care needs. The results can also help nurse managers identify relationships between nurse staffing and patient outcomes at the unit level, and develop ways to analyze such relationships.
ABSTRACT
This study introduces AIEgen-Deep, an innovative classification program combining AIEgen fluorescent dyes, deep learning algorithms, and the Segment Anything Model (SAM) for accurate cancer cell identification. The approach reduces manual annotation effort by 80%-90%. AIEgen-Deep demonstrates remarkable accuracy in recognizing cancer cell morphology, achieving a 75.9% accuracy rate across 26,693 images of eight different cell types. In binary classification of healthy versus cancerous cells, it shows enhanced performance, with an accuracy of 88.3% and a recall of 79.9%. The model effectively distinguishes between healthy cells (fibroblasts and WBCs) and various cancer cells (breast, bladder, and mesothelial), with accuracies of 89.0%, 88.6%, and 83.1%, respectively. The method's broad applicability across cancer types is anticipated to contribute significantly to early cancer detection and improved patient survival rates.
ABSTRACT
PURPOSE: Ultrasound is the preferred technique for evaluating thyroid nodules, but it depends on operator interpretation, leading to inter-observer variability. The current study aimed to determine the inter-physician consensus on nodule characteristics, risk categorization within the classification systems, and the need for fine-needle aspiration (FNA). METHODS: Four endocrinologists from the same center blindly evaluated 100 ultrasound images of thyroid nodules from 100 different patients. The following ultrasound features were evaluated: composition, echogenicity, margins, calcifications, and microcalcifications. Nodules were also classified according to the ATA, EU-TIRADS, K-TIRADS, and ACR-TIRADS systems. Krippendorff's alpha was used to assess interobserver agreement. RESULTS: The interobserver agreement for ultrasound features was: Krippendorff's coefficient 0.80 (0.71-0.89) for composition, 0.59 (0.47-0.72) for echogenicity, 0.73 (0.57-0.88) for margins, 0.55 (0.40-0.69) for calcifications, and 0.50 (0.34-0.67) for microcalcifications. The concordance for the classification systems was 0.70 (0.61-0.80) for ATA, 0.63 (0.54-0.73) for EU-TIRADS, 0.64 (0.55-0.73) for K-TIRADS, and 0.68 (0.60-0.77) for ACR-TIRADS. The concordance in the indication of FNA was 0.86 (0.71-1), 0.80 (0.71-0.88), 0.77 (0.67-0.87), and 0.73 (0.64-0.83) for the four systems, respectively. CONCLUSIONS: Interobserver agreement was acceptable for identifying nodules requiring cytologic study using the various classification systems. However, limited concordance was observed in risk stratification and in many ultrasonographic characteristics of the nodules.
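Krippendorff's alpha, the agreement statistic used in this study, is built from a coincidence matrix of rating pairs within each unit. A minimal sketch for nominal data with complete ratings (no missing values, a simplifying assumption) follows; the example ratings are hypothetical:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data; `units` is a list of rating
    tuples, one tuple per image (here: four endocrinologists per image).
    Assumes at least two distinct category values occur across all units."""
    o = Counter()  # coincidence matrix over ordered value pairs
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue
        for c, k in permutations(ratings, 2):   # all ordered pairs from distinct raters
            o[(c, k)] += 1 / (m - 1)
    n_c = Counter()
    for (c, _k), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())
    d_o = sum(w for (c, k), w in o.items() if c != k) / n           # observed disagreement
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n * (n - 1))
    return 1 - d_o / d_e

# Four raters assigning hypothetical composition labels to three images
images = [("solid",) * 4, ("cystic",) * 4, ("solid",) * 4]
print(krippendorff_alpha_nominal(images))  # → 1.0
```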
ABSTRACT
The Human Interference Scoring System (HISS) is a novel food-based diet-quality classification system based on the existing NOVA method. HISS involves allocating foods and fluids into categories from digital imagery based on food processing levels, followed by meal plan analysis using food-servings quantification. The primary purpose of this work was to evaluate the reliability of HISS. Trained nutrition professionals analyzed digital photographs from five hypothetical 24-hour food recalls and categorized foods into one of four HISS categories. A secondary purpose was to assess the nutrient composition of the food recalls and other selected foods from the HISS categories. Participants categorized foods into HISS categories effectively, with only minor discrepancies noted. High inter-rater reliability was observed in the outer HISS categories: unprocessed and ultra-processed foods. Ultra-processed items consistently displayed higher energy, carbohydrate, and sugar content than unprocessed foods, while unprocessed foods exhibited notably higher dietary fiber. This study introduces HISS as a potentially useful tool for quantifying food quality from digital-photography-based assessments. Its high inter-rater reliability and ability to capture relationships between food processing levels and nutrient composition make it a promising method for assessing dietary habits and food quality.
ABSTRACT
BACKGROUND: Ultra-processed foods (UPF), as proposed by the Nova food classification system, are linked to the development of obesity, several non-communicable chronic diseases, and death from all causes. The Nova-UPF screener developed in Brazil is a simple and quick tool to assess and monitor the consumption of these food products. The aim of this study was to adapt this short food-based screener to the Senegalese context and validate it against the 24-hour dietary recall. METHODS: The tool was adapted using the Delphi method with national experts and data from a food market survey. Following the adaptation, sub-categories were renamed and restructured, and new ones were introduced. The validation study was conducted in the urban area of Dakar in a convenience sample of 301 adults, using as a reference the dietary share of UPF on the day prior to the survey, expressed as a percentage of total energy intake obtained via 24-hour recall. The association between the Nova-UPF score and the dietary share of UPF was evaluated using linear regression models. The PABAK index was used to assess agreement in participants' classification according to quintiles of the Nova-UPF score and quintiles of the dietary share of UPF. RESULTS: The results show a linear and positive association (p-value < 0.001) between intervals of the Nova-UPF score and the average dietary share of UPF. There was near-perfect agreement in the distribution of individuals according to score quintiles and UPF dietary share quintiles (PABAK index = 0.84). CONCLUSION: The study concluded that the score provided by the Nova-UPF screener adapted to the Senegalese context is a valid estimate of UPF consumption.
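The PABAK (prevalence- and bias-adjusted kappa) index has a simple closed form for q equally weighted categories. For quintile agreement (q = 5), the reported 0.84 corresponds to a raw agreement of about 87.2%; that back-calculated figure is an assumption here, not a value stated in the abstract:

```python
def pabak(observed_agreement, n_categories):
    """Prevalence- and bias-adjusted kappa: (q * Po - 1) / (q - 1)
    for raw agreement Po over q categories."""
    q = n_categories
    return (q * observed_agreement - 1) / (q - 1)

# Quintile classification (q = 5): ~87.2% raw agreement gives the reported 0.84
print(round(pabak(0.872, 5), 2))  # → 0.84
```

Unlike plain kappa, PABAK fixes chance agreement at 1/q, which makes it robust when category prevalences are very unbalanced.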
ABSTRACT
The practice of documenting pharmacist interventions (PIs) has been endorsed by many hospital pharmacists' societies and organizations worldwide. Current systems for recording PIs have been developed to generate data on better patient and healthcare outcomes, but harmonization and transferability are apparently minimal. The present work aims to provide a descriptive and comprehensive overview of the currently utilized PI documentation and classification tools, contributing to increased systematization of the evidence. A systematic literature search was conducted in PubMed, Scopus, Web of Science and the Cumulative Index to Nursing and Allied Health Literature. Studies from 2008 onwards, after the release of the Basel Statements, were included if the interventions were made by hospital or clinical pharmacists in a hospital setting anywhere in the world. The quality of the included publications was assessed using the Mixed Methods Appraisal Tool. A total of 26 studies were included. Three studies did not report the documentation/classification method, ten used an in-house developed method, seven used externally developed tools, and six described method validation or translation. The evidence confirmed that most documentation/classification systems are designed in-house, but external development and validation of PI systems for hospital practice is gradually increasing. Reports on validated PI documentation/classification tools in use in hospital clinical practice are limited, including in countries with advanced hospital pharmacy practice. Needs and gaps in practice were identified. Further research should be conducted to understand why the use of validated documentation/classification methods is not yet widespread, given the advantages for patients and organizations.
ABSTRACT
In this presidential address, I argue for the importance of state-created categories and classification systems that determine eligibility for tangible and intangible resources. Through classification systems based on rules and regulations that reflect powerful interests and ideologies, bureaucracies maintain entrenched inequality systems that include, exclude, and neglect. I propose adopting a critical perspective when using formalized categories in our work, which would acknowledge the constructed nature of those categories, their naturalization through everyday practices, and their misalignments with lived experiences. This lens can reveal the systemic structures that engender both enduring patterns of inequality and state classification systems, and reframe questions about the people the state sorts into the categories we use. I end with a brief discussion of the benefits that can accrue from expanding our theoretical repertoires by including knowledge produced in the Global South.
ABSTRACT
BACKGROUND: Adult congenital heart disease (ACHD) patients pose unique challenges in identifying the time for transplantation and the factors influencing outcomes. OBJECTIVE: To identify hemodynamic, functional, and laboratory parameters that correlate with 1- and 10-year outcomes in ACHD patients considered for transplantation. METHODS: A retrospective chart review of long-term outcomes in adult patients with congenital heart disease (CHD) evaluated for heart or heart-plus-additional-organ transplant between 2004 and 2014 at our center was performed. A machine learning decision tree model was used to evaluate multiple clinical parameters correlating with 1- and 10-year survival. RESULTS: We identified 58 patients meeting inclusion criteria. D-transposition of the great arteries (D-TGA) with atrial switch operation (20.7%), tetralogy of Fallot/pulmonary atresia (15.5%), and tricuspid atresia (13.8%) were the most common diagnoses for transplant. Single-ventricle patients were the most likely to be listed for transplantation (39.8% of evaluated patients). Among a comprehensive list of clinical factors, the invasive hemodynamic parameters (pulmonary capillary wedge pressure (PCWP), systemic vascular pressure (SVP), and end-diastolic pressure (EDP)) correlated most strongly with 1- and 10-year outcomes. Transplanted patients with SVP < 14 and non-transplanted patients with PCWP < 15 had 100% 1-year survival. CONCLUSION: For the first time, our study identifies hemodynamic parameters as the factors most strongly correlated with 1- and 10-year outcomes in ACHD patients considered for transplantation, using a data-driven machine learning model.
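The reported cutoffs can be read as simple decision-tree splits. The sketch below is purely illustrative: the helper name and structure are assumptions, the study's actual model was a fitted decision tree, and this is not a clinical tool:

```python
def favorable_hemodynamics(svp=None, pcwp=None):
    """Illustrative screening rule from the reported cutoffs
    (SVP < 14 in transplanted patients, PCWP < 15 in non-transplanted
    patients were each associated with 100% 1-year survival).
    Returns None when no parameter is supplied."""
    checks = []
    if svp is not None:
        checks.append(svp < 14)    # systemic vascular pressure cutoff
    if pcwp is not None:
        checks.append(pcwp < 15)   # pulmonary capillary wedge pressure cutoff
    return all(checks) if checks else None

print(favorable_hemodynamics(svp=12))           # → True
print(favorable_hemodynamics(svp=16, pcwp=10))  # → False
```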
ABSTRACT
INTRODUCTION: Magnetic resonance (MR) tractography can be used to study the spatial relations between gliomas and white matter (WM) tracts. Various spatial patterns of WM tract alterations have been described in the literature. We reviewed classification systems of these patterns and investigated whether low-grade gliomas (LGGs) and high-grade gliomas (HGGs) demonstrate distinct spatial WM tract alteration patterns. METHODS: We conducted a systematic review and meta-analysis to summarize the evidence from MR tractography studies that investigated spatial WM tract alteration patterns in glioma patients. RESULTS: Eleven studies were included. Overall, four spatial WM tract alteration patterns were reported in the current literature: displacement, infiltration, disruption/destruction and edematous. There was considerable heterogeneity in the operational definitions of these terms. In a subset of studies, sufficient homogeneity in the classification systems was found to analyze pooled results for the displacement and infiltration patterns. Our meta-analyses suggested that LGGs displaced WM tracts significantly more often than HGGs (n = 259 patients, RR: 1.79, 95% CI [1.14, 2.79], I2 = 51%). No significant differences between LGGs and HGGs were found for WM tract infiltration (n = 196 patients, RR: 1.19, 95% CI [0.95, 1.50], I2 = 4%). CONCLUSIONS: The low number of included studies and their considerable methodological heterogeneity emphasize the need for a more uniform classification system for studying spatial WM tract alteration patterns using MR tractography. This review provides a first step towards such a classification system by showing that the current literature is inconclusive and that the ability of fractional anisotropy (FA) to define spatial WM tract alteration patterns should be critically evaluated. 
We found variations in spatial WM tract alteration patterns between LGGs and HGGs, when specifically examining displacement and infiltration in a subset of the included studies.
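The pooled risk ratios and I2 values above follow standard inverse-variance meta-analysis arithmetic. A minimal fixed-effect sketch is shown here; the three study RRs and variances are hypothetical, and the abstract does not state whether a fixed- or random-effects model was used:

```python
from math import exp, log

def pooled_rr(log_rrs, variances):
    """Fixed-effect inverse-variance pooling of study-level log risk ratios,
    with Cochran's Q and the I^2 heterogeneity statistic."""
    weights = [1 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, log_rrs)) / sum(weights)
    # Cochran's Q: weighted squared deviations from the pooled estimate
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_rrs))
    df = len(log_rrs) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0   # fraction of Q beyond chance
    return exp(pooled), i2

# Hypothetical three-study pooling: RRs of 1.5, 2.0 and 1.8 with their variances
rr, i2 = pooled_rr([log(1.5), log(2.0), log(1.8)], [0.04, 0.06, 0.05])
```

An I2 of 51% (as in the displacement analysis) means about half of the between-study variability in Q exceeds what sampling error alone would produce, whereas 4% (infiltration) indicates near-homogeneous studies.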