ABSTRACT
BACKGROUND: Ultrasound (US)-guided tunneled femoral peripherally inserted central catheters (PICCs) are a safe central venous access option in infants and neonates. Studies have shown, however, that femoral central venous access has the potential for high central line-associated bloodstream infection (CLABSI) rates, with a significant increase in risk around line day 30, though no studies have evaluated these risks exclusively for tunneled femoral PICCs. OBJECTIVE: The primary purpose of this study was to evaluate the relationship between line duration and the risk of CLABSI in tunneled femoral PICCs in children. MATERIALS AND METHODS: Four hundred forty-five patients (196 females, 249 males; median age: 49.4 days; median weight: 3.7 kg) who underwent 573 tunneled femoral PICC placements or exchanges from Jan. 1, 2017, to Jan. 31, 2020, were included in the study. All tunneled femoral PICCs were placed using US guidance, and catheter specifications, including catheter size (French) and length (cm), were retrieved from the electronic medical record. The location of the PICC placement, the number of lumens, the laterality of placement, and the patient's age and weight were also recorded. Only non-mucosal barrier injury CLABSIs, according to the Centers for Disease Control and Prevention (CDC) definitions, were counted as CLABSIs for this study. The number of central line days until a CLABSI event was analyzed with an accelerated failure time model using the exponential, Weibull, and log-normal distributions to determine the probability of a CLABSI over time, taking into consideration the recorded covariates. RESULTS: Tunneled femoral PICC placements accounted for 14,855 line days, during which 20 non-mucosal barrier injury CLABSIs occurred (CLABSI rate of 1.35 per 1,000 line days). The highest CLABSI rate occurred in PICCs placed in the neonatal intensive care unit (NICU), at 2.01 per 1,000 line days, and the lowest occurred in PICCs placed in interventional radiology, at 0.26 per 1,000 line days. Overall, PICCs placed outside of interventional radiology had a CLABSI rate of 1.72 per 1,000 line days. The CLABSI rate during the first 30 days a line was in situ was lower than the rate after 30 days (0.51 vs. 3.06 per 1,000 line days, respectively). Statistical modeling and hazard estimation using the Akaike information criterion corrected for small sample size (AICc)-averaged log-normal, Weibull, and exponential distributions demonstrated that the daily risk of CLABSI increases rapidly from day 1 to day 30, with the risk remaining high thereafter. CONCLUSION: While tunneled femoral PICCs are a relatively safe and effective central venous access alternative, the rate of CLABSI appears to increase rapidly with increasing line days until around day 30 and remains high thereafter.
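The hazard-over-time analysis can be reproduced in outline with standard survival tooling. Below is a minimal sketch, assuming synthetic data and the open-source lifelines library (the paper does not name its software); the parameter counts, AICc correction, and Akaike weights follow standard definitions, and all data values are placeholders rather than the study's records.

```python
# A minimal sketch (not the authors' code) of AICc-weighted hazard
# estimation across exponential, Weibull, and log-normal fits,
# using the `lifelines` library and synthetic line-duration data.
import numpy as np
from lifelines import ExponentialFitter, WeibullFitter, LogNormalFitter

rng = np.random.default_rng(42)
n = 573                                   # number of catheters, as in the study
durations = rng.weibull(1.5, size=n) * 40 # hypothetical line days
observed = rng.random(n) < 0.05           # hypothetical CLABSI events (~5%)

models = {
    "exponential": (ExponentialFitter(), 1),  # k = number of fitted parameters
    "weibull": (WeibullFitter(), 2),
    "lognormal": (LogNormalFitter(), 2),
}

times = np.arange(1, 61)
aiccs, hazards = {}, {}
for name, (fitter, k) in models.items():
    fitter.fit(durations, event_observed=observed)
    # small-sample corrected AIC: AICc = AIC + (2k^2 + 2k) / (n - k - 1)
    aiccs[name] = fitter.AIC_ + (2 * k**2 + 2 * k) / (n - k - 1)
    hazards[name] = fitter.hazard_at_times(times).to_numpy()

# Akaike weights from AICc differences, then a model-averaged hazard
delta = np.array(list(aiccs.values())) - min(aiccs.values())
weights = np.exp(-0.5 * delta)
weights /= weights.sum()
avg_hazard = sum(w * hazards[m] for w, m in zip(weights, aiccs))
print(dict(zip(aiccs, weights.round(3))))
print("day-30 averaged hazard:", avg_hazard[29].round(5))
```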
Subject(s)
Catheter-Related Infections , Catheterization, Central Venous , Catheterization, Peripheral , Central Venous Catheters , Sepsis , Catheter-Related Infections/diagnostic imaging , Catheter-Related Infections/epidemiology , Catheterization, Central Venous/adverse effects , Catheterization, Peripheral/adverse effects , Central Venous Catheters/adverse effects , Child , Female , Humans , Infant , Infant, Newborn , Male , Middle Aged , Retrospective Studies , Risk Factors
ABSTRACT
BACKGROUND: Children with forehead port-wine stains (PWSs) are at risk of Sturge-Weber syndrome (SWS). However, most will not develop neurologic manifestations. OBJECTIVE: To identify children at greatest risk of SWS. METHOD: In this retrospective cohort study of children with a forehead PWS, PWSs were classified as "large segmental" (half or more of a contiguous area of the hemiforehead or median pattern) or "trace/small segmental" (less than half of the hemiforehead). The outcome measure was a diagnosis of SWS. RESULTS: Ninety-six children had a forehead PWS. Fifty-one had a large segmental PWS, and 45 had a trace/small segmental PWS. All 21 children with SWS had large segmental forehead PWSs. Large segmental forehead PWSs had a higher specificity (0.71 vs 0.27, P < .0001) and a higher positive predictive value (0.41 vs 0.22, P < .0001) for SWS than any forehead involvement by a PWS. LIMITATIONS: Retrospective study at a referral center. CONCLUSION: Children with large segmental forehead PWSs are at highest risk of SWS.
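The specificity and positive-predictive-value comparison reduces to simple 2x2 arithmetic, sketched below. The true-positive and false-positive counts follow from the abstract (all 21 SWS cases fell among the 51 large segmental PWSs, so PPV = 21/51 = 0.41); the true-negative count is a hypothetical placeholder, since the full denominators are not given here.

```python
# Illustrative sketch of the diagnostic metrics compared in the study.
def specificity(tn: int, fp: int) -> float:
    """True-negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

def ppv(tp: int, fp: int) -> float:
    """Positive predictive value: TP / (TP + FP)."""
    return tp / (tp + fp)

# TP = 21 and FP = 51 - 21 = 30 follow from the abstract;
# TN = 75 is a placeholder for non-SWS children not flagged by the sign.
tp, fp, tn = 21, 30, 75
print(f"specificity = {specificity(tn, fp):.2f}, PPV = {ppv(tp, fp):.2f}")
```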
Subject(s)
Facial Dermatoses/etiology , Forehead/pathology , Port-Wine Stain/etiology , Sturge-Weber Syndrome/complications , Cheek/pathology , Child , Child, Preschool , Facial Dermatoses/pathology , Female , Humans , Infant , Infant, Newborn , Male , Neuroimaging , Organ Specificity , Paresis/diagnostic imaging , Paresis/etiology , Port-Wine Stain/pathology , Retrospective Studies , Risk , Seizures/diagnostic imaging , Seizures/etiology , Sturge-Weber Syndrome/diagnosis , Sturge-Weber Syndrome/diagnostic imaging , Sturge-Weber Syndrome/epidemiology
ABSTRACT
Black swans are improbable events that nonetheless occur, often with profound consequences. Such events drive important transitions in social systems (e.g., banking collapses) and physical systems (e.g., earthquakes), yet the extent to which ecological populations buffer or suffer from such extremes remains unclear. Here, we estimate the prevalence and direction of black-swan events (heavy-tailed process noise) in 609 animal populations after accounting for population dynamics (productivity, density dependence, and typical stochasticity). We find strong evidence for black-swan events in approximately 4% of populations. These events occur most frequently for birds (7%), mammals (5%), and insects (3%) and are not explained by any life-history covariates but tend to be driven by external perturbations such as climate, severe winters, predators, parasites, or the combined effect of multiple factors. Black-swan events manifest primarily as population die-offs and crashes (86%) rather than unexpected increases, and ignoring heavy-tailed process noise leads to an underestimate of the magnitude of population crashes. We suggest modelers consider heavy-tailed, downward-skewed probability distributions, such as the skewed Student t used here, when making forecasts of population abundance. Our results demonstrate the importance of both modeling heavy-tailed downward events in populations and developing conservation strategies that are robust to ecological surprises.
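The contrast between Gaussian and heavy-tailed process noise can be illustrated with a short simulation. The sketch below uses a symmetric Student t for simplicity (the paper fits a skewed variant), and all parameter values are invented rather than estimated from the 609 populations.

```python
# Compare extreme one-step population declines under normal vs. heavy-tailed
# (Student t) process noise in a Gompertz model; parameters are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a, b, sigma, T = 0.6, 0.8, 0.2, 10_000   # Gompertz parameters, time steps

def simulate(noise):
    x = np.empty(T)                      # x = log abundance
    x[0] = a / (1 - b)                   # start at equilibrium
    for t in range(1, T):
        x[t] = a + b * x[t - 1] + noise[t]
    return x

normal_eps = rng.normal(0, sigma, T)
nu = 3                                   # heavy tails; scaled to the same sd
t_eps = stats.t.rvs(nu, size=T, random_state=rng) * sigma / np.sqrt(nu / (nu - 2))

for label, eps in [("normal", normal_eps), ("student-t", t_eps)]:
    x = simulate(eps)
    crashes = np.mean(np.diff(x) < -3 * sigma)   # drops beyond 3 sigma
    print(f"{label:10s} extreme one-step declines: {crashes:.4f}")
```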
Subject(s)
Anseriformes , Animals , Mammals , Models, Theoretical , Population Dynamics
ABSTRACT
This study develops a quantitative framework for estimating the effects of extreme suspended-sediment events (SSC > 25 mg L⁻¹) on virtual populations of Chinook (Oncorhynchus tshawytscha) and coho (O. kisutch) salmon in a coastal watershed of British Columbia, Canada. We used a life history model coupled with a dose-response model to evaluate the populations' responses to a set of simulated suspended-sediment scenarios. Our results indicate that a linear increase in SSC produces non-linear declining trajectories in both Chinook and coho populations, but this decline was more evident for Chinook salmon despite their shorter freshwater residence. The model presented here can provide insights into SSC impacts on population responses of salmonids and potentially assist resource managers when planning conservation or remediation strategies.
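The coupling of a dose-response model to a life-history projection can be sketched as follows. The severity coefficients, no-effect level, and survival rates below are hypothetical placeholders in the style of Newcombe-Jensen dose-response curves, not the paper's Chinook/coho parameterizations.

```python
# Schematic coupling of a suspended-sediment dose-response model to a
# simple single-stage salmon life-history projection; all values invented.
import numpy as np

def severity(ssc_mg_per_l: float, days: float,
             a: float = 1.0, b: float = 0.6, c: float = 0.7) -> float:
    """Newcombe-Jensen-style severity score: a + b*ln(duration) + c*ln(SSC)."""
    return a + b * np.log(days) + c * np.log(ssc_mg_per_l)

def freshwater_survival(base: float, sev: float, k: float = 0.05) -> float:
    """Discount baseline freshwater survival as severity rises."""
    return base * np.exp(-k * max(sev - 4.0, 0.0))  # 4.0 = hypothetical no-effect level

# Project spawner abundance over 20 years under a linearly increasing SSC;
# baseline parameters are tuned so the population is stable with no effect.
spawners, fecundity, marine_surv = 1_000.0, 2_500.0, 0.02
for year in range(20):
    ssc = 25 + 5 * year                             # linear SSC increase
    s_fw = freshwater_survival(0.02, severity(ssc, days=30))
    spawners = spawners * fecundity * s_fw * marine_surv
print(f"spawners after 20 years: {spawners:,.0f}")  # non-linear decline
```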
Subject(s)
Geologic Sediments , Salmon/physiology , Animals , British Columbia , Life Cycle Stages , Monte Carlo Method , Reproduction
ABSTRACT
Climate change is likely to lead to increasing population variability and extinction risk. Theoretically, greater population diversity should buffer against rising climate variability, and this theory is often invoked as a reason for greater conservation. However, this has rarely been quantified. Here we show how a portfolio approach to managing population diversity can inform metapopulation conservation priorities in a changing world. We develop a salmon metapopulation model in which productivity is driven by spatially distributed thermal tolerance and patterns of short- and long-term climate change. We then implement spatial conservation scenarios that control population carrying capacities and evaluate the metapopulation portfolios as a financial manager might: along axes of conservation risk and return. We show that preserving a diversity of thermal tolerances minimizes risk, given environmental stochasticity, and ensures persistence, given long-term environmental change. When the thermal tolerances of populations are unknown, doubling the number of populations conserved may nearly halve expected metapopulation variability. However, this reduction in variability can come at the expense of long-term persistence if climate change increasingly restricts available habitat, forcing ecological managers to balance society's desire for short-term stability and long-term viability. Our findings suggest the importance of conserving the processes that promote thermal-tolerance diversity, such as genetic diversity, habitat heterogeneity, and natural disturbance regimes, and demonstrate that diverse natural portfolios may be critical for metapopulation conservation in the face of increasing climate variability and change.
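The core portfolio calculation is the variance of aggregate growth across correlated populations. The sketch below assumes equally weighted, equicorrelated populations with invented variance and correlation values; greater thermal-tolerance diversity would correspond to a lower correlation and a stronger diversification benefit.

```python
# Variance of a metapopulation "portfolio" as a function of how many
# populations are conserved; values are illustrative only.
import numpy as np

def portfolio_variance(n_pops: int, var: float = 1.0, rho: float = 0.4) -> float:
    """Variance of the mean of n equally weighted, equicorrelated populations:
    Var(mean) = var/n + (n-1)/n * rho * var."""
    return var / n_pops + (n_pops - 1) / n_pops * rho * var

for n in (2, 4, 8, 16):
    print(f"{n:2d} populations conserved -> portfolio variance "
          f"{portfolio_variance(n):.3f}")
```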
Subject(s)
Climate Change , Conservation of Natural Resources/methods , Ecosystem , Models, Biological , Animals , Salmon/physiology
ABSTRACT
Species invasions have a range of negative effects on recipient ecosystems, and many occur at a scale and magnitude that preclude complete eradication. When complete extirpation is unlikely with available management resources, an effective strategy may be to suppress invasive populations below levels predicted to cause undesirable ecological change. We illustrated this approach by developing and testing targets for the control of invasive Indo-Pacific lionfish (Pterois volitans and P. miles) on Western Atlantic coral reefs. We first developed a size-structured simulation model of predation by lionfish on native fish communities, which we used to predict threshold densities of lionfish beyond which native fish biomass should decline. We then tested our predictions by experimentally manipulating lionfish densities above or below reef-specific thresholds, and monitoring the consequences for native fish populations on 24 Bahamian patch reefs over 18 months. We found that reducing lionfish below predicted threshold densities effectively protected native fish community biomass from predation-induced declines. Reductions in density of 25-92%, depending on the reef, were required to suppress lionfish below levels predicted to overconsume prey. On reefs where lionfish were kept below threshold densities, native prey fish biomass increased by 50-70%. Gains in small (<6 cm) size classes of native fishes translated into lagged increases in larger size classes over time. The biomass of larger individuals (>15 cm total length), including ecologically important grazers and economically important fisheries species, had increased by 10-65% by the end of the experiment. Crucially, similar gains in prey fish biomass were realized on reefs subjected to partial and full removal of lionfish, but partial removals took 30% less time to implement. By contrast, the biomass of small native fishes declined by >50% on all reefs with lionfish densities exceeding reef-specific thresholds. Large inter-reef variation in the biomass of prey fishes at the outset of the study, which influences the threshold density of lionfish, means that we could not identify a single rule of thumb for guiding control efforts. However, our model provides a method for setting reef-specific targets for population control using local monitoring data. Our work is the first to demonstrate that for ongoing invasions, suppressing invaders below densities that cause environmental harm can have a similar effect, in terms of protecting the native ecosystem on a local scale, to achieving complete eradication.
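The threshold logic can be reduced to a balance between prey biomass production and lionfish consumption. The sketch below is a deliberately simplified, non-size-structured version with hypothetical rates; the paper's model tracks predator and prey size structure explicitly.

```python
# Lionfish density at which estimated consumption just balances prey
# biomass production on a reef; all rates are hypothetical placeholders.
def lionfish_threshold(prey_production_g_m2_yr: float,
                       per_capita_consumption_g_yr: float,
                       reef_area_m2: float) -> float:
    """Lionfish per reef above which predation exceeds prey production."""
    sustainable_consumption = prey_production_g_m2_yr * reef_area_m2
    return sustainable_consumption / per_capita_consumption_g_yr

# Hypothetical patch reef: 150 g m^-2 yr^-1 prey production, 1,000 m^2 reef,
# 7.3 kg consumed per lionfish per year
print(f"threshold: {lionfish_threshold(150, 7300, 1000):.0f} lionfish per reef")
```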
Subject(s)
Conservation of Natural Resources/methods , Environmental Monitoring/methods , Fishes/physiology , Introduced Species , Models, Biological , Pest Control , Animal Distribution , Animals , Computer Simulation , Coral Reefs , Fishes/classification
ABSTRACT
Tunas and their relatives dominate the world's largest ecosystems and sustain some of the most valuable fisheries. The impacts of fishing on these species have been debated intensively over the past decade, giving rise to divergent views on the scale and extent of the impacts of fisheries on pelagic ecosystems. We use all available age-structured stock assessments to evaluate the adult biomass trajectories and exploitation status of 26 populations of tunas and their relatives (17 tunas, 5 mackerels, and 4 Spanish mackerels) from 1954 to 2006. Overall, populations have declined, on average, by 60% over the past half century, but the decline in the total adult biomass is lower (52%), driven by a few abundant populations. The trajectories of individual populations depend on the interaction between life histories, ecology, and fishing pressure. The steepest declines are exhibited by two distinct groups: the largest, longest lived, highest value temperate tunas and the smaller, short-lived mackerels, both with most of their populations being overexploited. The remaining populations, mostly tropical tunas, have been fished down to approximately maximum sustainable yield levels, preventing further expansion of catches in these fisheries. Fishing mortality has increased steadily to the point where around 12.5% of the tunas and their relatives are caught each year globally. Overcapacity of these fisheries is jeopardizing their long-term sustainability. Guaranteeing higher catches, stabilizing profits, and reducing collateral impacts on marine ecosystems will require rebuilding overexploited populations and adopting stricter management measures that reduce overcapacity and regulate threatening trade.
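The gap between the average decline (60%) and the decline in total adult biomass (52%) is a weighting effect, illustrated below with invented numbers: a few abundant populations that declined less dominate the total.

```python
# Mean of per-stock declines vs. decline in summed biomass; the numbers
# are illustrative placeholders, not the assessment data.
import numpy as np

b1954 = np.array([100.0, 80.0, 60.0, 2000.0])    # initial biomass, 4 stocks
b2006 = np.array([ 25.0, 28.0, 27.0, 1100.0])    # final biomass

per_stock_decline = 1 - b2006 / b1954            # decline of each population
mean_decline = per_stock_decline.mean()
total_decline = 1 - b2006.sum() / b1954.sum()    # decline in summed biomass
print(f"mean of per-stock declines: {mean_decline:.0%}")   # larger
print(f"decline in total biomass:   {total_decline:.0%}")  # smaller
```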
Subject(s)
Tuna/genetics , Animals , Biomass , Conservation of Natural Resources , Ecology , Ecosystem , Environment , Fisheries , Fishes/physiology , Models, Statistical , Perciformes , Phylogeny , Population Dynamics , Species Specificity
ABSTRACT
OBJECTIVES: Clinicians' perceptions of scarcity influence rationing of critical care resources, which may lead to serious adverse outcomes for patients who are denied access. We sought to better understand the phenomenon of scarcity in the critical care setting. DESIGN: Qualitative research methods. We used purposeful sampling to recruit ICU clinicians who were frequently involved in decisions to allocate ICU resources. Thematic analysis was performed to identify concepts related to the phenomenon of scarcity. SETTING: An ICU of a university-affiliated hospital in Toronto, Canada, between October and December 2007. SUBJECTS: We conducted 22 interviews with 12 ICU physicians, 4 ICU fellows, 2 ICU nursing team leaders, and 4 ICU resource nurses. MAIN RESULTS: The perception of scarcity arose from a complex interaction of factors within the institution including: 1) practices of non-ICU physicians (e.g., failure to specify end-of-life treatment plans or to secure an ICU bed prior to elective high-risk surgery), 2) family demands for life support and clinicians' perception of a lack of legal support if they opposed these, and 3) inability to transfer patients to non-ICU care settings in a timely manner. Implications of scarcity included: 1) diversions of critically ill patients, 2) premature patient transfers, 3) temporary delivery of critical care in non-ICU locations (e.g., emergency department, postanesthesia care unit), and 4) interprofessional conflicts. CONCLUSIONS: ICU clinicians' perceptions of scarcity may lead to rationing of critical care resources. We found that nonmedical factors strongly influenced prioritization activity, both for admission and discharge. Although scarcity of ICU beds might be mitigated by process improvements such as patient flow or proactive communication, our findings highlight the importance of a fair process for inevitable limit setting at the bedside.
Subject(s)
Health Care Rationing/organization & administration , Hospitals, University/organization & administration , Intensive Care Units/organization & administration , Perception , Humans , Length of Stay , Ontario , Patient Discharge , Patient Transfer
ABSTRACT
Although there are many indicators of endangerment (i.e., whether populations or species meet criteria that justify conservation action), their reliability has rarely been tested. Such indicators may fail to identify that a population or species meets criteria for conservation action (false negative) or may incorrectly show that such criteria have been met (false positive). To quantify the rate of both types of error for 20 commonly used indicators of declining abundance (threat indicators), we used receiver operating characteristic curves derived from historical (1938-2007) data for 18 sockeye salmon (Oncorhynchus nerka) populations in the Fraser River, British Columbia, Canada. We retrospectively determined each population's yearly status (reflected by change in abundance over time) on the basis of each indicator. We then compared that population's status in a given year with the status in subsequent years (determined by the magnitude of decline in abundance across those years). For each sockeye population, we calculated how often each indicator of past status matched subsequent status. No single threat indicator provided error-free estimates of status, but indicators that reflected the extent (i.e., magnitude) of past decline in abundance (through comparison of current abundance with some historical baseline abundance) tended to better reflect status in subsequent years than the rate of decline over the previous 3 generations (a widely used indicator). We recommend that when possible, the reliability of various threat indicators be evaluated with empirical analyses before such indicators are used to determine the need for conservation action. These indicators should include estimates from the entire data set to take into account a historical baseline.
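The receiver-operating-characteristic evaluation can be sketched with simulated data: score each year by an indicator (here, proportional decline from a historical baseline), label it by whether abundance subsequently declined, and compute the area under the ROC curve. The thresholds and simulation parameters below are illustrative, not the paper's 18 Fraser River sockeye populations.

```python
# ROC evaluation of a decline-from-baseline threat indicator on a
# simulated abundance series.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
years, horizon = 70, 5
abundance = np.exp(np.cumsum(rng.normal(0.0, 0.3, years))) * 1e4

baseline = abundance[:10].mean()                 # historical baseline period

scores, labels = [], []
for t in range(10, years - horizon):
    scores.append(1 - abundance[t] / baseline)   # indicator: decline from baseline
    future_drop = 1 - abundance[t + horizon] / abundance[t]
    labels.append(int(future_drop > 0.30))       # status: >30% subsequent decline

labels = np.array(labels)
assert 0 < labels.sum() < len(labels), "ROC needs both classes"
# Note: for a pure random walk the expected AUC is ~0.5; real populations
# with density dependence and autocorrelation can differ.
print(f"AUC of baseline-decline indicator: {roc_auc_score(labels, scores):.2f}")
```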
Subject(s)
Conservation of Natural Resources/methods , Salmon/physiology , Animal Migration , Animals , British Columbia , Models, Biological , Population Density , ROC Curve , Reproduction , Retrospective Studies , Seasons , Sensitivity and Specificity
ABSTRACT
Despite improvements in communication, errors in end-of-life care continue to be made. For example, healthcare professionals may take direction from the wrong substitute decision-maker, or from family members when the patient is capable; permit families to propose treatment plans; conflate values and beliefs with prior expressed wishes; or fail to inquire about prior expressed wishes. Sometimes healthcare professionals know what prior expressed wishes are but do not respect them; others do not believe they have enough time to have an end-of-life discussion or lack the confidence, willingness and skills to manage one. As has been shown in initiatives to improve surgical safety, the use of a checklist presents opportunities to potentially minimize common mistakes and errors. When engaging in end-of-life care, a checklist can help focus on what needs to be communicated rather than how it needs to be communicated. We propose a checklist to support healthcare professionals in meeting their ethical and legal obligations to patients at the end of life. The checklist should minimize common mistakes, and in situations where irreconcilable conflict is unavoidable, it will ensure that both healthcare teams and family members are informed and prepared.
Subject(s)
Checklist/methods , Critical Illness/therapy , Terminal Care/ethics , Aged, 80 and over , Female , Humans , Informed Consent , Patient Care Planning/ethics , Patient Care Planning/legislation & jurisprudence , Resuscitation Orders/ethics , Resuscitation Orders/legislation & jurisprudence , Terminal Care/legislation & jurisprudence , Third-Party Consent/ethics , Third-Party Consent/legislation & jurisprudence
ABSTRACT
There is little debate about the importance of ethics in health care, and clearly defined rules, regulations, and oaths help ensure patients' trust in the care they receive. However, standards are not as well established for the data professions within health care, even though the responsibility to treat patients in an ethical way extends to the data collected about them. Increasingly, data scientists, analysts, and engineers are becoming fiduciarily responsible for patient safety, treatment, and outcomes, and will require training and tools to meet this responsibility. We developed a data ethics checklist that enables users to consider the possible ethical issues that arise from the development and use of data products. The combination of ethics training for data professionals, a data ethics checklist as part of project management, and a data ethics committee holds potential for providing a framework to initiate dialogues about data ethics and can serve as an ethical touchstone for rapid use within typical analytic workflows, and we recommend the use of this or equivalent tools in deploying new data products in hospitals.
Subject(s)
Codes of Ethics , Data Science/ethics , Hospitals, Pediatric/ethics , Checklist , Ethics, Clinical , Ethics, Professional , Hospital Information Systems/ethics , Washington
Subject(s)
Biomass , Conservation of Natural Resources/methods , Introduced Species , Models, Biological , Perciformes , Animals , Temperature
ABSTRACT
BACKGROUND: Intensive care physicians often must rely on substitute decision makers to address all dimensions of the construct of "best interest" for incapable, critically ill patients. This task involves identifying prior wishes and facilitating the substitute decision maker's understanding of the incapable patient's condition and likely response to treatment. We sought to determine how well such discussions are documented in a typical intensive care unit. METHODS: Using a quality of communication instrument developed from a literature search and expert opinion, 2 investigators transcribed and analyzed 260 handwritten communications for 105 critically ill patients who died in the intensive care unit between January and June 2006. Cohen's kappa was calculated before analysis and then disagreements were resolved by consensus. We report results on a per-patient basis to represent documented communication as a process leading up to the time of death in the ICU. We report frequencies and percentages for discrete data, and median (m) and interquartile range (IQR) for continuous data. RESULTS: Our cohort was elderly (m 72, IQR 58-81 years) and had high APACHE II scores predictive of a high probability of death (m 28, IQR 23-36). Length of stay in the intensive care unit prior to death was short (m 2, IQR 1-5 days), and withdrawal of life support preceded death for more than half (n = 57, 54%). Brain death criteria were present for 18 patients (17%). Although intensivists' communications were timely (median 17 h from admission to critical care), the person consenting on behalf of the incapable patient was explicitly documented for only 10% of patients. Life support strategies at the time of communication were noted in 45% of charts, and options for their future use were presented in 88%. Considerations relevant to determining the patient's best interest in relation to the treatment plan were not well documented. While explicit survival estimates were noted in 50% of charts, physicians infrequently documented their own predictions of the patient's functional status (20%), anticipated need for chronic care (0%), or post-ICU quality of life (3%). Similarly, documentation of the patient's own perspectives on these ranged from 2% to 18%. CONCLUSIONS: Intensivists' documentation of their communication with substitute decision makers frequently outlined the proposed plan of treatment, but often lacked evidence of discussion relevant to whether the treatment plan was expected to improve the patient's condition. Legislative standards for determination of best interest, such as the Health Care Consent Act in Ontario, Canada, may provide guidance for intensivists to optimally document the rationales for proposed treatment plans.
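The inter-rater agreement step (Cohen's kappa between the 2 investigators) can be computed with standard tooling, as in the sketch below; the ratings are invented examples, not the study's coded charts.

```python
# Cohen's kappa for two raters independently coding the same chart items.
from sklearn.metrics import cohen_kappa_score

# 1 = communication element documented, 0 = absent, per rater
rater_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0]
rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")   # agreement beyond chance
```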
Subject(s)
Communication , Critical Care/standards , Critical Illness , Decision Making/ethics , Intensive Care Units , Medical Records , Patient Care Planning/standards , Patient-Centered Care/standards , Physicians/standards , Practice Patterns, Physicians'/standards , Adult , Critical Care/methods , Ethics Consultation , Female , Humans , Length of Stay , Male , Middle Aged , Ontario , Palliative Care , Patient-Centered Care/ethics , Retrospective Studies , Terminal Care , Third-Party Consent/ethics , Workforce
ABSTRACT
Peri-urban lakes increasingly experience intensified anthropogenic impacts as watershed uses and developments increase. Cultus Lake is an oligo-mesotrophic, peri-urban lake near Vancouver, British Columbia, Canada that experiences significant seasonal tourism, anthropogenic nutrient loadings, and associated cultural eutrophication. Left unabated, these cumulative stresses threaten the critical habitat and persistence of two endemic species at risk (Coastrange Sculpin, Cultus population; Cultus Lake sockeye salmon) and diverse lake-derived ecosystem services. We constructed water and nutrient budgets for the Cultus Lake watershed to identify and quantify major sources and loadings of nitrogen (N) and phosphorus (P). A steady-state water quality model, calibrated against current loadings and limnological data, was used to reconstruct the historic lake trophic status and explore limnological changes in response to realistic development and mitigation scenarios. Significant local P loadings to Cultus Lake arise from septic leaching (19%) and migratory gull guano deposition (22%). Watershed runoff contributes the majority of total P (53%) and N (73%) loads to Cultus Lake, with substantial local N contributions arising from the agricultural Columbia Valley (41% of total N load). However, we estimate that up to 66% of N and 70% of P in watershed runoff is ultimately sourced via deposition from the nutrient-contaminated regional airshed, with direct atmospheric deposition on the lake surface contributing an additional 17% of N and 5% of P. Thus, atmospheric deposition is the largest single source of nutrient loading to Cultus Lake, cumulatively responsible for 63% and 42% of total N and P loadings, respectively. Modeled future loading scenarios suggest Cultus Lake could become mesotrophic within the next 25 years, highlighting a heightened need for near-term abatement of P loads. Although mitigating P loads from local watershed sources will slow the rate of eutrophication, management efforts targeting reductions in atmospheric P within the regional airshed are necessary to halt or reverse lake eutrophication, and conserve both critical habitat for imperiled species at risk and lake-derived ecosystem services.
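A steady-state water quality model of the kind described is often a Vollenweider-type phosphorus model. The sketch below shows one common formulation under hypothetical inputs; the paper's calibrated model may differ.

```python
# Vollenweider-type steady-state lake phosphorus model; inputs are
# hypothetical placeholders, not the Cultus Lake budget values.
import math

def steady_state_tp(areal_load_mg_m2_yr: float,
                    mean_depth_m: float,
                    residence_time_yr: float) -> float:
    """Predicted lake total P (mg m^-3): TP = L / (q_s * (1 + sqrt(tau)))."""
    q_s = mean_depth_m / residence_time_yr       # areal hydraulic load, m yr^-1
    return areal_load_mg_m2_yr / (q_s * (1 + math.sqrt(residence_time_yr)))

# Hypothetical inputs: 300 mg P m^-2 yr^-1, 30 m mean depth, 2-yr residence
tp = steady_state_tp(300, 30, 2)
print(f"predicted TP ~ {tp:.1f} mg m^-3")   # ~8 ug/L: oligo-mesotrophic range
```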
Subject(s)
Ecosystem , Environmental Monitoring , Eutrophication , Lakes , Atmosphere/chemistry , Calibration , Cities , Models, Theoretical , Nitrogen/analysis , Phosphorus/analysis , Water , Water Quality
ABSTRACT
The productivity and biomass of pristine coral reef ecosystems are poorly understood, particularly in the Caribbean where communities have been impacted by overfishing and multiple other stressors over centuries. Using historical data on the spatial distribution and abundance of the extinct Caribbean monk seal (Monachus tropicalis), this study reconstructs the population size, structure and ecological role of this once common predator within coral reef communities, and provides evidence that historical reefs supported biomasses of fishes and invertebrates up to six times greater than those found on typical modern Caribbean reefs. An estimated 233,000-338,000 monk seals were distributed among 13 colonies across the Caribbean. The biomass of reef fishes and invertebrates required to support historical seal populations was 732-1,018 g m⁻² of reef, which exceeds that found on any Caribbean reef today and is comparable with those measured in remote Pacific reefs. Quantitative estimates of historically dense monk seal colonies and their consumption rates on pristine reefs provide concrete data on the magnitude of decline in animal biomass on Caribbean coral reefs. Realistic reconstruction of these past ecosystems is critical to understanding the profound and long-lasting effect of human hunting on the functioning of coral reef ecosystems.
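The reconstruction logic, converting seal numbers and consumption rates into the prey biomass required to support them, is back-of-the-envelope arithmetic of the following form; every value in the sketch is a hypothetical placeholder, not the paper's estimate.

```python
# Standing prey biomass needed to support a monk seal colony, given
# per-seal intake, reef area, and a prey production-to-biomass ratio.
def required_prey_biomass_g_m2(n_seals: float,
                               intake_kg_day_per_seal: float,
                               reef_area_km2: float,
                               prey_p_to_b: float) -> float:
    """Standing prey biomass (g m^-2) whose production covers seal intake."""
    annual_intake_g = n_seals * intake_kg_day_per_seal * 365 * 1000
    standing_biomass = annual_intake_g / prey_p_to_b   # P/B ratio, yr^-1
    return standing_biomass / (reef_area_km2 * 1e6)    # km^2 -> m^2

# Hypothetical colony: 20,000 seals eating 5 kg/day over 500 km^2 of reef,
# prey production-to-biomass ratio of 1.0 per year
print(f"{required_prey_biomass_g_m2(20_000, 5, 500, 1.0):.0f} g m^-2")
```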
Subject(s)
Ecosystem , Seals, Earless/growth & development , Animals , Anthozoa/growth & development , Archaeology , Caribbean Region , Conservation of Natural Resources , Extinction, Biological , Female , Food Chain , Male , Population Dynamics
ABSTRACT
Geographic variability in abundance can be driven by multiple physical and biological factors operating at multiple scales. To understand the determinants of larval trematode prevalence within populations of the marine snail host Littorina littorea, we quantified many physical and biological variables at 28 New England intertidal sites. A hierarchical, mixed-effects model identified the abundance of gulls (the final hosts and dispersive agents of infective trematode stages) and snail size (a proxy for time of exposure) as the primary factors associated with trematode prevalence. The predominant influence of these variables coupled with routinely low infection rates (21 of the 28 populations exhibited prevalence <12%) suggest broad-scale recruitment limitation of trematodes. Although infection rates were spatially variable, formal analyses detected no regional spatial gradients in either trematode prevalence or independent environmental variables. Trematode prevalence appears to be predominantly determined by local site characteristics favoring high gull abundance.
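As a simplified stand-in for the paper's hierarchical mixed-effects model, the sketch below simulates snails across sites and fits an ordinary logistic regression of infection on gull abundance and snail size; the effect sizes are invented and site-level random effects are omitted.

```python
# Simulate trematode infection across sites, then fit a (fixed-effects)
# logistic regression; the paper's model is hierarchical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_sites, snails_per_site = 28, 100
rows = []
for site in range(n_sites):
    gulls = rng.gamma(2.0, 2.0)                    # site-level gull abundance
    for _ in range(snails_per_site):
        size = rng.normal(15, 3)                   # snail shell size, mm
        logit = -6.0 + 0.35 * gulls + 0.15 * size  # invented effects
        infected = rng.random() < 1 / (1 + np.exp(-logit))
        rows.append(dict(site=site, gulls=gulls, size=size, infected=int(infected)))

df = pd.DataFrame(rows)
fit = smf.logit("infected ~ gulls + size", data=df).fit(disp=False)
print(fit.params.round(3))    # recovered gull and size effects
print(f"overall prevalence: {df.infected.mean():.1%}")  # routinely low
```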
Subject(s)
Geography , Host-Parasite Interactions , Snails/parasitology , Trematoda/growth & development , Animals , Bayes Theorem , Biodiversity , Larva/growth & development , Markov Chains , Monte Carlo Method , Oceans and Seas , Population Density , Population Dynamics , Prevalence , Species Specificity
ABSTRACT
BACKGROUND: As more patients survive the acute respiratory distress syndrome, an understanding of the long-term outcomes of this condition is needed. METHODS: We evaluated 109 survivors of the acute respiratory distress syndrome 3, 6, and 12 months after discharge from the intensive care unit. At each visit, patients were interviewed and underwent a physical examination, pulmonary-function testing, a six-minute-walk test, and a quality-of-life evaluation. RESULTS: Patients who survived the acute respiratory distress syndrome were young (median age, 45 years) and severely ill (median Acute Physiology, Age, and Chronic Health Evaluation score, 23) and had a long stay in the intensive care unit (median, 25 days). Patients had lost 18 percent of their base-line body weight by the time they were discharged from the intensive care unit and stated that muscle weakness and fatigue were the reasons for their functional limitation. Lung volume and spirometric measurements were normal by 6 months, but carbon monoxide diffusion capacity remained low throughout the 12-month follow-up. No patients required supplemental oxygen at 12 months, but 6 percent of patients had arterial oxygen saturation values below 88 percent during exercise. The median score for the physical role domain of the Medical Outcomes Study 36-item Short-Form General Health Survey (a health-related quality-of-life measure) increased from 0 at 3 months to 25 at 12 months (score in the normal population, 84). The distance walked in six minutes increased from a median of 281 m at 3 months to 422 m at 12 months; all values were lower than predicted. The absence of systemic corticosteroid treatment, the absence of illness acquired during the intensive care unit stay, and rapid resolution of lung injury and multiorgan dysfunction were associated with better functional status during the one-year follow-up. CONCLUSIONS: Survivors of the acute respiratory distress syndrome have persistent functional disability one year after discharge from the intensive care unit. Most patients have extrapulmonary conditions, with muscle wasting and weakness being most prominent.