Results 1 - 17 of 17
1.
Anesth Analg ; 138(3): 645-654, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38364244

ABSTRACT

BACKGROUND: Transfusion of packed red blood cells (pRBCs) is still associated with risks. This study aims to determine whether renal function deterioration in the context of individual transfusions in individual patients can be predicted using machine learning. Recipient and donor characteristics linked to increased risk are identified. METHODS: This study was registered at ClinicalTrials.gov (NCT05466370) and was conducted after local ethics committee approval. We evaluated 3366 transfusion episodes from a university hospital between October 31, 2016, and August 31, 2020. Random forest models were tuned and trained via the Python auto-sklearn package to predict acute kidney injury (AKI). The models included recipients' and donors' demographic parameters and laboratory values, donor questionnaire results, and the age of the pRBCs. Bootstrapping on the test dataset was used to calculate the means and standard deviations of various performance metrics. RESULTS: AKI as defined by a modified Kidney Disease: Improving Global Outcomes (KDIGO) criterion developed after 17.4% of transfusion episodes (base rate). AKI could be predicted with an area under the receiver operating characteristic curve (AUC-ROC) of 0.73 ± 0.02. The negative (NPV) and positive (PPV) predictive values were 0.90 ± 0.02 and 0.32 ± 0.03, respectively. Feature importance and relative risk analyses revealed that donor features were far less important than recipient features for predicting posttransfusion AKI. CONCLUSIONS: Surprisingly, only the recipients' characteristics played a decisive role in AKI prediction. Based on this result, we speculate that the selection of a specific pRBC unit may have less influence than recipient characteristics.
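The evaluation strategy described above — a random forest classifier whose test-set AUC-ROC, NPV, and PPV are summarised as mean ± standard deviation over bootstrap resamples — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the data are synthetic, and the feature count and ~18% positive rate are chosen only to mimic the setting.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(3366, 10))                          # synthetic recipient/donor features
y = (X[:, 0] + rng.normal(size=3366) > 1.3).astype(int)  # synthetic AKI label, ~18% positive

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

aucs, npvs, ppvs = [], [], []
for _ in range(100):                                     # bootstrap the test set
    idx = rng.integers(0, len(y_te), len(y_te))
    y_b, p_b = y_te[idx], proba[idx]
    pred_b = (p_b >= 0.5).astype(int)
    if y_b.min() == y_b.max() or pred_b.sum() in (0, len(pred_b)):
        continue                                         # need both classes for all metrics
    aucs.append(roc_auc_score(y_b, p_b))
    tn, fp, fn, tp = confusion_matrix(y_b, pred_b).ravel()
    npvs.append(tn / (tn + fn))
    ppvs.append(tp / (tp + fp))

print(f"AUC-ROC {np.mean(aucs):.2f} ± {np.std(aucs):.2f}, "
      f"NPV {np.mean(npvs):.2f}, PPV {np.mean(ppvs):.2f}")
```

Bootstrapping the held-out set (rather than retraining) is what yields the "metric ± SD" figures reported in the abstract.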


Subject(s)
Acute Kidney Injury, Kidney, Humans, Acute Kidney Injury/diagnosis, Acute Kidney Injury/etiology, Acute Kidney Injury/therapy, Blood Transfusion, Retrospective Studies, Risk Assessment/methods, ROC Curve
2.
Eur J Anaesthesiol ; 39(9): 766-773, 2022 09 01.
Article in English | MEDLINE | ID: mdl-35852544

ABSTRACT

BACKGROUND: Massive perioperative allogeneic blood transfusion, that is, perioperative transfusion of more than 10 units of packed red blood cells (pRBC), is one of the main contributors to perioperative morbidity and mortality in cardiac surgery. Prediction of perioperative blood transfusion might enable preemptive treatment strategies to reduce risk and improve patient outcomes while reducing resource utilisation. We, therefore, investigated the precision of five different machine learning algorithms to predict the occurrence of massive perioperative allogeneic blood transfusion in cardiac surgery at our centre. OBJECTIVE: Is it possible to predict massive perioperative allogeneic blood transfusion using machine learning? DESIGN: Retrospective, observational study. SETTING: Single adult cardiac surgery centre in Austria between 01 January 2010 and 31 December 2019. PATIENTS: Patients undergoing cardiac surgery. MAIN OUTCOME MEASURES: Primary outcome measures were the number of patients receiving at least 10 units pRBC, the area under the curve for the receiver operating characteristics curve, the F1 score, and the negative-predictive (NPV) and positive-predictive values (PPV) of the five machine learning algorithms used to predict massive perioperative allogeneic blood transfusion. RESULTS: A total of 3782 (1124 female) patients were enrolled and 139 received at least 10 pRBC units. Using all features available at hospital admission, massive perioperative allogeneic blood transfusion could be excluded rather accurately. The best area under the curve was achieved by Random Forests: 0.810 (0.76 to 0.86), with a high NPV of 0.99. This was still true using only the eight most important features [area under the curve 0.800 (0.75 to 0.85)].
CONCLUSION: Machine learning models may provide clinical decision support as to which patients to focus on for perioperative preventive treatment, preemptively reducing massive perioperative allogeneic blood transfusion by predicting which patients are not at risk. TRIAL REGISTRATION: Johannes Kepler University Ethics Committee Study Number 1091/2021, Clinicaltrials.gov identifier NCT04856618.
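The finding that a model restricted to the eight most important features performs almost as well as the full model can be sketched like this: rank features by random-forest importance, then retrain on the top eight. Everything here is illustrative — synthetic admission-time features, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(3782, 20))                       # synthetic admission-time features
y = (X[:, :3].sum(axis=1) + rng.normal(size=3782) > 2.5).astype(int)  # rare outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
full = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
auc_full = roc_auc_score(y_te, full.predict_proba(X_te)[:, 1])

# Keep only the eight highest-ranked features and retrain.
top8 = np.argsort(full.feature_importances_)[::-1][:8]
small = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr[:, top8], y_tr)
auc_small = roc_auc_score(y_te, small.predict_proba(X_te[:, top8])[:, 1])
print(f"all features: {auc_full:.3f}, top 8: {auc_small:.3f}")
```

When only a few features carry signal, the reduced model loses little discrimination, which mirrors the reported 0.810 vs. 0.800 AUC.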


Subject(s)
Cardiac Surgical Procedures, Hematopoietic Stem Cell Transplantation, Adult, Blood Transfusion, Cardiac Surgical Procedures/adverse effects, Female, Humans, Machine Learning, Retrospective Studies
3.
J Med Syst ; 46(5): 23, 2022 Mar 29.
Article in English | MEDLINE | ID: mdl-35348909

ABSTRACT

Many previous studies claim to have developed machine learning models that diagnose COVID-19 from blood tests. However, we hypothesize that changes in the underlying distribution of the data, so-called domain shifts, affect the predictive performance and reliability and are a reason for the failure of such machine learning models in clinical application. Domain shifts can be caused, e.g., by changes in the disease prevalence (spreading or tested population), by refined RT-PCR testing procedures (way of taking samples, laboratory procedures), or by virus mutations. Therefore, machine learning models for diagnosing COVID-19 or other diseases may not be reliable and may degrade in performance over time. We investigate whether domain shifts are present in COVID-19 datasets and how they affect machine learning methods. We further set out to estimate the mortality risk based on routinely acquired blood tests in a hospital setting throughout pandemics and under domain shifts. We reveal domain shifts by evaluating the models on a large-scale dataset with different assessment strategies, such as temporal validation. We present the novel finding that domain shifts strongly affect machine learning models for COVID-19 diagnosis and deteriorate their predictive performance and credibility. Therefore, frequent re-training and re-assessment are indispensable for robust models enabling clinical utility.
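Temporal validation, the assessment strategy named above, can be sketched in a few lines: train on an early period, then evaluate on a later period in which the feature-label relationship has drifted. The data and the shift below are simulated purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

def make_wave(n, weight):
    """Simulate one time period; `weight` controls how predictive feature 0 is."""
    X = rng.normal(size=(n, 5))
    y = (weight * X[:, 0] + rng.normal(size=n) > 0).astype(int)
    return X, y

X_early, y_early = make_wave(2000, 2.0)   # strong association early in the pandemic
X_late, y_late = make_wave(2000, 0.5)     # weaker association after the shift

clf = LogisticRegression().fit(X_early, y_early)
auc_early = roc_auc_score(y_early, clf.predict_proba(X_early)[:, 1])
auc_late = roc_auc_score(y_late, clf.predict_proba(X_late)[:, 1])
print(f"AUC early period: {auc_early:.2f}, AUC later period: {auc_late:.2f}")
```

A model validated only by random splitting within the early period would never reveal the performance drop; splitting by time does.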


Subject(s)
COVID-19, COVID-19/diagnosis, COVID-19 Testing, Hematologic Tests, Humans, Machine Learning, Reproducibility of Results
4.
Transfusion ; 60(9): 1977-1986, 2020 09.
Article in English | MEDLINE | ID: mdl-32596877

ABSTRACT

BACKGROUND: The ability to predict transfusions arising during hospital admission might enable economized blood supply management and might furthermore increase patient safety by ensuring a sufficient stock of red blood cells (RBCs) for a specific patient. We therefore investigated the precision of four different machine learning-based prediction algorithms to predict transfusion, massive transfusion, and the number of transfusions in patients admitted to a hospital. STUDY DESIGN AND METHODS: This was a retrospective, observational study in three adult tertiary care hospitals in Western Australia between January 2008 and June 2017. Primary outcome measures for the classification tasks were the area under the curve for the receiver operating characteristics curve, the F1 score, and the average precision of the four machine learning algorithms used: neural networks (NNs), logistic regression (LR), random forests (RFs), and gradient boosting (GB) trees. RESULTS: Using our four predictive models, transfusion of at least 1 unit of RBCs could be predicted rather accurately (sensitivity for NN, LR, RF, and GB: 0.898, 0.894, 0.584, and 0.872, respectively; specificity: 0.958, 0.966, 0.964, 0.965). Using the four methods for prediction of massive transfusion was less successful (sensitivity for NN, LR, RF, and GB: 0.780, 0.721, 0.002, and 0.797, respectively; specificity: 0.994, 0.995, 0.993, 0.995). As a consequence, prediction of the total number of packed RBCs transfused was also rather inaccurate. CONCLUSION: This study demonstrates that the necessity for intrahospital transfusion can be forecasted reliably; however, the number of RBC units transfused during a hospital stay is more difficult to predict.
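Comparing the four model families by sensitivity and specificity, as the abstract does, can be sketched on synthetic data. The models are the scikit-learn counterparts of the algorithms named above; the data and the 0.5 decision threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(3000, 8))                                # synthetic admission features
y = (X[:, 0] - X[:, 1] + rng.normal(size=3000) > 1.0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=3)

models = {
    "NN": MLPClassifier(max_iter=1000, random_state=3),
    "LR": LogisticRegression(),
    "RF": RandomForestClassifier(n_estimators=100, random_state=3),
    "GB": GradientBoostingClassifier(random_state=3),
}
results = {}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    results[name] = (tp / (tp + fn), tn / (tn + fp))          # (sensitivity, specificity)
    print(name, results[name])
```

On rare outcomes, specificity tends to be uniformly high while sensitivity separates the models — the same pattern the study reports for massive transfusion.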


Subject(s)
Decision Making, Computer-Assisted, Hospitalization, Machine Learning, Adult, Blood Transfusion, Female, Humans, Male, Predictive Value of Tests, Retrospective Studies, Western Australia
5.
Bioengineering (Basel) ; 11(6)2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38927841

ABSTRACT

Background/Objectives: We defined the value of a machine learning algorithm to distinguish between the EEG response to no light or any light stimulations, and between light stimulations with different brightnesses, in awake volunteers with closed eyelids. This new method utilizing EEG analysis is visionary in the understanding of visual signal processing and will facilitate the deepening of our knowledge concerning anesthetic research. Methods: X-gradient boosting models were used to classify the cortical response to visual stimulation (no light vs. light stimulations and two lights with different brightnesses). For each of the two classifications, three scenarios were tested: training and prediction in all participants (all), training and prediction in one participant (individual), and training across all but one participant with prediction performed in the participant left out (one out). Results: Ninety-four Caucasian adults were included. The machine learning algorithm had a very high predictive value and accuracy in differentiating between no light and any light stimulations (AUC-ROC (all): 0.96; accuracy (all): 0.94; AUC-ROC (individual): 0.96 ± 0.05; accuracy (individual): 0.94 ± 0.05; AUC-ROC (one out): 0.98 ± 0.04; accuracy (one out): 0.96 ± 0.04). The machine learning algorithm was highly predictive and accurate in distinguishing between light stimulations with different brightnesses (AUC-ROC (all): 0.97; accuracy (all): 0.91; AUC-ROC (individual): 0.98 ± 0.04; accuracy (individual): 0.96 ± 0.04; AUC-ROC (one out): 0.96 ± 0.05; accuracy (one out): 0.93 ± 0.06). The predictive value and accuracy of both classification tasks were comparable between males and females. Conclusions: Machine learning algorithms could almost continuously and reliably differentiate between the cortical EEG responses to no light or light stimulations using visual evoked potentials in awake female and male volunteers with eyes closed.
Our findings may open new possibilities for the use of visual evoked potentials in the clinical and intraoperative setting.
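The "one out" scenario above — train on all participants but one, test on the held-out participant — is leave-one-subject-out cross-validation. A minimal sketch, with scikit-learn's gradient boosting standing in for the paper's X-gradient boosting and simulated EEG features replacing real recordings:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)
n_subj, n_trials = 6, 60
subjects = np.repeat(np.arange(n_subj), n_trials)          # subject ID per trial
y = rng.integers(0, 2, size=n_subj * n_trials)             # light vs. no light
X = y[:, None] * 1.5 + rng.normal(size=(n_subj * n_trials, 4))  # simulated evoked response

accs = []
for s in range(n_subj):                                    # hold out one subject at a time
    train, test = subjects != s, subjects == s
    clf = GradientBoostingClassifier(random_state=0).fit(X[train], y[train])
    accs.append(accuracy_score(y[test], clf.predict(X[test])))
print(f"one-out accuracy: {np.mean(accs):.2f} ± {np.std(accs):.2f}")
```

Splitting by subject rather than by trial is what makes the estimate honest about generalisation to a new, unseen participant.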

6.
J Clin Med ; 12(12)2023 Jun 18.
Article in English | MEDLINE | ID: mdl-37373805

ABSTRACT

BACKGROUND: Bleeding events are frequent complications during extracorporeal membrane oxygenation therapy (ECMO). OBJECTIVE: To determine the rate of acquired factor XIII deficiency and its association with major bleeding events and transfusion requirements in adults undergoing ECMO therapy. MATERIALS AND METHODS: A retrospective single-centre cohort study. Adult patients receiving veno-venous or veno-arterial ECMO therapy during a 2-year period were analysed and screened for factor XIII activity measurements. Factor XIII deficiency was defined based on the lowest factor XIII activity measured during ECMO therapy. RESULTS: Among 84 subjects included in the analysis, factor XIII deficiency occurred in 69% during ECMO therapy. There were more major bleeding events (OR, 3.37; 95% CI, 1.16-10.56; p = 0.02) and higher transfusion requirements (red blood cells, 20 vs. 12, p < 0.001; platelets, 4 vs. 2, p = 0.006) in patients with factor XIII deficiency compared to patients with normal factor XIII activity. In a multivariate regression model, factor XIII deficiency was independently associated with bleeding severity (p = 0.03). CONCLUSIONS: In this retrospective single-centre study, acquired factor XIII deficiency was observed in 69% of adult ECMO patients with a high bleeding risk. Factor XIII deficiency was associated with higher rates of major bleeding events and transfusion requirements.
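An odds ratio with a 95% confidence interval, as reported above, is computed from a 2×2 exposure-outcome table. The counts below are made up for illustration — they are not the study's data.

```python
import math

# Rows: factor XIII deficiency yes/no; columns: major bleeding yes/no.
# All four counts are hypothetical.
a, b = 30, 28    # deficient: bleeding / no bleeding
c, d = 7, 19     # normal activity: bleeding / no bleeding

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)          # Woolf's standard error of ln(OR)
ci_lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR {odds_ratio:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f})")
```

A CI whose lower bound stays above 1, as in the study's 1.16-10.56, is what supports the claim of an association.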

7.
Eur J Emerg Med ; 30(6): 408-416, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37578440

ABSTRACT

AIMS: Patient admission is a decision relying on sparsely available data. This study aims to provide prediction models for discharge versus admission for ward observation or intensive care, and for 30-day mortality, for patients triaged with the Manchester Triage System. METHODS: This is a single-centre, observational, retrospective cohort study from data within ten minutes of patient presentation at the interdisciplinary emergency department of the Kepler University Hospital, Linz, Austria. We trained machine learning models including Random Forests and Neural Networks individually to predict discharge versus ward observation or intensive care admission, and 30-day mortality. For analysis of the features' relevance, we used permutation feature importance. RESULTS: A total of 58,323 adult patients between 1 December 2015 and 31 August 2020 were included. Neural Networks and Random Forests predicted admission to ward observation with an AUC-ROC of 0.842 ± 0.00, with the most important features being age and chief complaint. For admission to intensive care, the models had an AUC-ROC of 0.819 ± 0.002, with the most important features being the Manchester Triage category and heart rate, and for the outcome 30-day mortality an AUC-ROC of 0.925 ± 0.001. The most important features for the prediction of 30-day mortality were age and general ward admission. CONCLUSION: Machine learning can provide predictions on discharge versus admission to general wards and intensive care and inform about the risk of 30-day mortality for patients in the emergency department.
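Permutation feature importance, the relevance analysis named above, scores a feature by how much a fitted model's performance drops when that feature's values are shuffled. A sketch on synthetic triage-like data — the feature names and coefficients are illustrative only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 4000
age = rng.uniform(18, 95, n)                      # hypothetical feature
heart_rate = rng.normal(80, 15, n)                # hypothetical feature
y = (0.04 * age + 0.02 * heart_rate + rng.normal(size=n) > 5.0).astype(int)
X = np.column_stack([age, heart_rate, rng.normal(size=n)])  # column 2 is pure noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=5)
clf = RandomForestClassifier(n_estimators=200, random_state=5).fit(X_tr, y_tr)
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=5)
print(imp.importances_mean)                       # age should dominate; noise near zero
```

Because the importance is computed on held-out data, a feature the model merely memorised (like the noise column) scores near zero.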


Subject(s)
Hospitalization, Triage, Adult, Humans, Retrospective Studies, Emergency Service, Hospital, Machine Learning
8.
Comput Biol Med ; 150: 106086, 2022 11.
Article in English | MEDLINE | ID: mdl-36191392

ABSTRACT

There have been several attempts to quantify the diagnostic distortion caused by algorithms that perform low-dimensional electrocardiogram (ECG) representation. However, there is no universally accepted quantitative measure that allows the diagnostic distortion arising from denoising, compression, and ECG beat representation algorithms to be determined. Hence, the main objective of this work was to develop a framework to enable biomedical engineers to efficiently and reliably assess diagnostic distortion resulting from ECG processing algorithms. We propose a semiautomatic framework for quantifying the diagnostic resemblance between original and denoised/reconstructed ECGs. Evaluation of the ECG must be done manually, but is kept simple and does not require medical training. In a case study, we quantified the agreement between raw and reconstructed (denoised) ECG recordings by means of kappa-based statistical tests. The proposed methodology takes into account that the observers may agree by chance alone. Consequently, for the case study, our statistical analysis reports the "true", beyond-chance agreement in contrast to other, less robust measures, such as simple percent agreement calculations. Our framework allows efficient assessment of clinically important diagnostic distortion, a potential side effect of ECG (pre-)processing algorithms. Accurate quantification of a possible diagnostic loss is critical to any subsequent ECG signal analysis, for instance, the detection of ischemic ST episodes in long-term ECG recordings.
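The contrast drawn above between simple percent agreement and "true", beyond-chance agreement is exactly what Cohen's kappa captures. A small worked example with made-up ECG readings (the labels and items are hypothetical):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical diagnostic reads of ten ECGs, before and after denoising.
original = ["normal", "ischemia", "normal", "normal", "ischemia", "normal",
            "normal", "normal", "ischemia", "normal"]
denoised = ["normal", "ischemia", "normal", "normal", "normal", "normal",
            "normal", "normal", "ischemia", "normal"]

percent_agreement = sum(a == b for a, b in zip(original, denoised)) / len(original)
kappa = cohen_kappa_score(original, denoised)
print(percent_agreement, kappa)   # kappa is lower: it discounts chance agreement
```

With 90% of reads "normal", two observers agree often by chance alone, so the 0.90 raw agreement shrinks to a kappa of about 0.74 — the "beyond-chance" figure the framework reports.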


Subject(s)
Data Compression, Signal Processing, Computer-Assisted, Electrocardiography/methods, Algorithms, Biomedical Engineering
9.
J Patient Saf ; 18(5): 494-498, 2022 08 01.
Article in English | MEDLINE | ID: mdl-35026794

ABSTRACT

OBJECTIVES: The ability to predict in-hospital mortality from data available at hospital admission would identify patients at risk and thereby assist hospital-wide patient safety initiatives. Our aim was to use modern machine learning tools to predict in-hospital mortality from standardized data sets available at hospital admission. METHODS: This was a retrospective, observational study in 3 adult tertiary care hospitals in Western Australia between January 2008 and June 2017. Primary outcome measures were the area under the curve for the receiver operating characteristics curve, the F1 score, and the average precision of the 4 machine learning algorithms used: logistic regression, neural networks, random forests, and gradient boosting trees. RESULTS: Using our 4 predictive models, in-hospital mortality could be predicted satisfactorily (areas under the curve for neural networks, logistic regression, random forests, and gradient boosting trees: 0.932, 0.936, 0.935, and 0.935, respectively), with moderate F1 scores: 0.378, 0.367, 0.380, and 0.380, respectively. Average precision values were 0.312, 0.321, 0.334, and 0.323, respectively. It remains unknown whether additional features might improve our models; however, this would result in additional efforts for data acquisition in daily clinical practice. CONCLUSIONS: This study demonstrates that, using only a limited, standardized data set, in-hospital mortality can be predicted satisfactorily at the time of hospital admission. More parameters describing patients' health are likely needed to improve our models.
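The pattern above — AUC above 0.93 but F1 and average precision below 0.4 — is typical of rare outcomes, and easy to reproduce on synthetic data. The data and model below are illustrative, not the study's:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
X = rng.normal(size=(20000, 6))                              # synthetic admission data
y = (X[:, 0] + rng.normal(size=20000) > 2.8).astype(int)     # rare outcome (~2%)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=6)
clf = LogisticRegression().fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

auc = roc_auc_score(y_te, proba)
f1 = f1_score(y_te, (proba >= 0.5).astype(int))
ap = average_precision_score(y_te, proba)
print(f"AUC {auc:.2f}, F1 {f1:.2f}, average precision {ap:.2f}")
```

AUC ignores prevalence, while F1 and average precision are dragged down by the many false positives that even a well-ranked rare-event classifier produces, which is why the study reports both kinds of metric.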


Subject(s)
Hospitalization, Machine Learning, Adult, Hospital Mortality, Hospitals, Humans, Retrospective Studies, Risk Assessment
10.
JMIR Med Inform ; 10(10): e38557, 2022 Oct 21.
Article in English | MEDLINE | ID: mdl-36269654

ABSTRACT

Electronic health records (EHRs) have been successfully used in data science and machine learning projects. However, most of these data are collected for clinical use rather than for retrospective analysis. This means that researchers typically face many different issues when attempting to access and prepare the data for secondary use. We aimed to investigate how raw EHRs can be accessed and prepared in retrospective data science projects in a disciplined, effective, and efficient way. We report our experience and findings from a large-scale data science project analyzing routinely acquired retrospective data from the Kepler University Hospital in Linz, Austria. The project involved data collection from more than 150,000 patients over a period of 10 years. It included diverse data modalities, such as static demographic data, irregularly acquired laboratory test results, regularly sampled vital signs, and high-frequency physiological waveform signals. Raw medical data can be corrupted in many unexpected ways that demand thorough manual inspection and highly individualized data cleaning solutions. We present a general data preparation workflow, which was shaped in the course of our project and consists of the following 7 steps: obtain a rough overview of the available EHR data, define clinically meaningful labels for supervised learning, extract relevant data from the hospital's data warehouses, match data extracted from different sources, deidentify them, detect errors and inconsistencies therein through a careful exploratory analysis, and implement a suitable data processing pipeline in actual code. Only a few of the data preparation issues encountered in our project were addressed by generic medical data preprocessing tools that have been proposed recently. Instead, highly individualized solutions for the specific data used in one's own research seem inevitable.
We believe that the proposed workflow can serve as guidance for practitioners, helping them to identify and address potential problems early and avoid some common pitfalls.
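Three of the workflow steps above — matching data from different sources, deidentifying, and detecting inconsistencies — can be illustrated on toy records. The tables, column names, and plausibility rules below are hypothetical, not taken from the project:

```python
import pandas as pd

demographics = pd.DataFrame({
    "patient_id": [101, 102, 103],
    "name": ["A. Doe", "B. Roe", "C. Poe"],       # direct identifier
    "birth_year": [1950, 1962, 2030],             # 2030: an obvious data error
})
labs = pd.DataFrame({
    "patient_id": [101, 101, 103],
    "creatinine_mg_dl": [1.1, -0.4, 0.9],         # negative value: an error
})

# Match data extracted from different sources (workflow step 4).
merged = demographics.merge(labs, on="patient_id", how="inner")

# Deidentify: drop direct identifiers, keep a pseudonymous key (step 5).
merged = merged.drop(columns=["name"])

# Detect errors and inconsistencies via simple plausibility checks (step 6).
implausible = (merged["birth_year"] > 2024) | (merged["creatinine_mg_dl"] <= 0)
clean = merged[~implausible]
print(clean)
```

Real EHR cleaning is, as the report stresses, far more individualized, but the skeleton — join, deidentify, then filter on explicit plausibility rules — stays the same.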

11.
Lab Med ; 52(2): 146-149, 2021 Mar 15.
Article in English | MEDLINE | ID: mdl-33340312

ABSTRACT

OBJECTIVE: The diagnosis of COVID-19 is based on the detection of SARS-CoV-2 in respiratory secretions, blood, or stool. Currently, reverse transcription polymerase chain reaction (RT-PCR) is the most commonly used method to test for SARS-CoV-2. METHODS: In this retrospective cohort analysis, we evaluated whether machine learning could exclude SARS-CoV-2 infection using routinely available laboratory values. A Random Forest algorithm with 28 unique features was trained to predict the RT-PCR results. RESULTS: Out of 12,848 patients undergoing SARS-CoV-2 testing, routine blood tests were simultaneously performed in 1357 patients. The machine learning model could predict SARS-CoV-2 test results with an accuracy of 86% and an area under the receiver operating characteristic curve of 0.74. CONCLUSION: Machine learning methods can reliably predict a negative SARS-CoV-2 RT-PCR test result using standard blood tests.
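Ruling out infection, as in the conclusion above, hinges on choosing a decision threshold at which a negative model output is trustworthy. One common recipe — illustrated here on synthetic data, not the study's — is to pick the threshold so that nearly all true positives score above it, then check the negative predictive value below it:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.normal(size=(1357, 28))                    # synthetic routine lab values
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1357) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=7)
clf = RandomForestClassifier(n_estimators=300, random_state=7).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

# Pick a threshold below which only ~5% of true positives fall.
pos_scores = np.sort(proba[y_te == 1])
threshold = pos_scores[int(0.05 * len(pos_scores))]
neg_pred = proba <= threshold                      # model says "infection excluded"
npv = float(np.mean(y_te[neg_pred] == 0))
print(f"threshold {threshold:.2f}, NPV {npv:.2f}")
```

Tuning the operating point for sensitivity is what turns a modest-AUC model into a useful rule-out test.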


Subject(s)
COVID-19/blood, Machine Learning, Adult, Aged, Aged, 80 and over, COVID-19 Nucleic Acid Testing, Female, Humans, Male, Middle Aged, Retrospective Studies, SARS-CoV-2/isolation & purification, Sensitivity and Specificity
12.
IEEE Trans Biomed Eng ; 68(10): 2997-3008, 2021 10.
Article in English | MEDLINE | ID: mdl-33571084

ABSTRACT

OBJECTIVE: The electrocardiogram (ECG) follows a characteristic shape, which has led to the development of several mathematical models for extracting clinically important information. Our main objective is to resolve limitations of previous approaches, that is, to simultaneously cope with various noise sources, perform exact beat segmentation, and retain diagnostically important morphological information. METHODS: We therefore propose a model that is based on Hermite and sigmoid functions combined with piecewise polynomial interpolation for exact segmentation and low-dimensional representation of individual ECG beat segments. Hermite and sigmoidal functions enable reliable extraction of important ECG waveform information while the piecewise polynomial interpolation captures noisy signal features like the baseline wander (BLW). For that we use variable projection, which allows the separation of linear and nonlinear morphological variations of the corresponding ECG waveforms. The resulting ECG model simultaneously performs BLW cancellation, beat segmentation, and low-dimensional waveform representation. RESULTS: We demonstrate its BLW denoising and segmentation performance in two experiments, using synthetic and real data. Compared to state-of-the-art algorithms, the experiments showed less diagnostic distortion in case of denoising and a more robust delineation for the P and T wave. CONCLUSION: This work suggests a novel concept for ECG beat representation, easily adaptable to other biomedical signals with similar shape characteristics, such as blood pressure and evoked potentials. SIGNIFICANCE: Our method is able to capture linear and nonlinear wave shape changes. Therefore, it provides a novel methodology to understand the origin of morphological variations caused, for instance, by respiration, medication, and abnormalities.
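The core idea of the representation above — a beat segment described by a handful of Hermite-function coefficients — can be sketched with a plain least-squares fit. This simplified sketch uses a toy QRS-like waveform and omits the paper's sigmoid terms, segmentation, and variable projection:

```python
import numpy as np

t = np.linspace(-3, 3, 200)
beat = np.exp(-t**2) * (1 - 2 * t**2)        # toy QRS-like waveform (not real ECG)

# Hermite functions: Hermite polynomial H_k(t) times a Gaussian envelope.
basis = np.stack([
    np.polynomial.hermite.Hermite.basis(k)(t) * np.exp(-t**2 / 2)
    for k in range(10)
], axis=1)

coeffs, *_ = np.linalg.lstsq(basis, beat, rcond=None)
reconstruction = basis @ coeffs
rel_error = np.linalg.norm(beat - reconstruction) / np.linalg.norm(beat)
print(coeffs.shape, f"relative error {rel_error:.4f}")  # 10 numbers stand for 200 samples
```

The compression is the point: downstream analysis works on the small coefficient vector, while the paper's variable projection additionally adapts the nonlinear parameters (translation, dilation) of the basis to each beat.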


Subject(s)
Electrocardiography, Signal Processing, Computer-Assisted, Algorithms, Arrhythmias, Cardiac, Humans, Models, Theoretical
13.
Ecol Appl ; 18(5): 1093-106, 2008 Jul.
Article in English | MEDLINE | ID: mdl-18686574

ABSTRACT

Ranches are being converted to exurban housing developments in the southwestern United States, with potentially significant but little-studied impacts on biological diversity. We counted birds in grasslands and savannas in southeastern Arizona that were grazed by livestock, embedded in low-density exurban housing developments, or both, or neither. Species richness and bird abundance were higher in exurban neighborhoods than in undeveloped landscapes, independent of livestock grazing. The positive response to development was particularly evident among doves, quail, hummingbirds, aerial insectivores, and some but not all ground-foraging sparrows. Effects of livestock grazing were comparatively minor and mostly involved birds with requirements for tall ground cover or the lack of it. The average rank correlation between counts of individual species and housing density was positive across all transects. However, this relationship disappeared among the exurban transects alone, and bird species richness on the exurban transects was negatively correlated with the number of homes nearby. These results suggest that the positive influence of exurban development on avian abundance and variety was greatest at the lowest housing densities. We attribute the attraction of many birds to exurban development to an oasis effect, in which resources otherwise scarce in arid southwestern environments (shade, nectar, nest sites, and especially water) are relatively abundant around exurban home sites. This finding is consistent with the hypothesis that exurban home sites represented resource supply points inside birds' home ranges otherwise consisting mostly of natural vegetation.


Subject(s)
Birds, Urban Renewal, Animals, Arizona, Birds/physiology, Feeding Behavior, Species Specificity
14.
Ecology ; 88(5): 1322-7, 2007 May.
Article in English | MEDLINE | ID: mdl-17536417

ABSTRACT

Species richness and evenness are components of biological diversity that may or may not be correlated with one another and with patterns of species abundance. We compared these attributes among flowering plants, grasshoppers, butterflies, lizards, summer birds, winter birds, and rodents across 48 plots in the grasslands and mesquite-oak savannas of southeastern Arizona. Species richness and evenness were uncorrelated or weakly negatively correlated for each taxonomic group, supporting the conclusion that richness alone is an incomplete measure of diversity. In each case, richness was positively correlated with one or more measures of abundance. By contrast, evenness usually was negatively correlated with the abundance variables, reflecting the fact that plots with high evenness generally were those where all species present were about equally uncommon. Therefore richness, but not evenness, usually was a positive predictor of places of conservation value, if these are defined as places where species of interest are especially abundant. Species diversity was more positively correlated with evenness than with richness among grasshoppers and flowering plants, in contrast to the other taxonomic groups, and the positive correlations between richness and abundance were comparatively weak for grasshoppers and plants as well. Both of these differences can be attributed to the fact that assemblages of plants and grasshoppers were numerically dominated by small subsets of common species (grasses and certain spur-throated grasshoppers) whose abundances differed greatly among plots in ways unrelated to species richness of the groups as a whole.
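The distinction above between richness, diversity, and evenness can be made concrete with standard indices — Shannon diversity H' and Pielou's evenness J' = H'/ln(S). The two plots below are hypothetical, chosen only to show equal richness with very different evenness:

```python
import math

def richness_diversity_evenness(counts):
    """Return species richness S, Shannon diversity H', and Pielou's evenness J'."""
    counts = [c for c in counts if c > 0]
    s = len(counts)                                              # richness
    total = sum(counts)
    h = -sum(c / total * math.log(c / total) for c in counts)    # Shannon H'
    j = h / math.log(s) if s > 1 else 0.0                        # Pielou's J'
    return s, h, j

even_plot = [10, 10, 10, 10]        # all species equally common
skewed_plot = [37, 1, 1, 1]         # dominated by one common species

print(richness_diversity_evenness(even_plot))
print(richness_diversity_evenness(skewed_plot))
```

Both plots have richness 4, yet their diversity and evenness differ sharply — which is why, as the abstract argues, richness alone is an incomplete measure of diversity.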


Subject(s)
Biodiversity, Ecology, Ecosystem, Animals, Arizona, Birds/classification, Birds/growth & development, Butterflies/classification, Butterflies/growth & development, Grasshoppers/classification, Grasshoppers/growth & development, Lizards/classification, Lizards/growth & development, Poaceae/classification, Poaceae/growth & development, Population Density, Population Dynamics, Quercus/classification, Quercus/growth & development, Rodents/classification, Rodents/growth & development, Species Specificity, Time Factors
15.
Am Nat ; 168(5): 660-81, 2006 Nov.
Article in English | MEDLINE | ID: mdl-17080364

ABSTRACT

Large vertebrates are strong interactors in food webs, yet they were lost from most ecosystems after the dispersal of modern humans from Africa and Eurasia. We call for restoration of missing ecological functions and evolutionary potential of lost North American megafauna using extant conspecifics and related taxa. We refer to this restoration as Pleistocene rewilding; it is conceived as carefully managed ecosystem manipulations whereby costs and benefits are objectively addressed on a case-by-case and locality-by-locality basis. Pleistocene rewilding would deliberately promote large, long-lived species over pest and weed assemblages, facilitate the persistence and ecological effectiveness of megafauna on a global scale, and broaden the underlying premise of conservation from managing extinction to encompass restoring ecological and evolutionary processes. Pleistocene rewilding can begin immediately with species such as Bolson tortoises and feral horses and continue through the coming decades with elephants and Holarctic lions. Our exemplar taxa would contribute biological, economic, and cultural benefits to North America. Owners of large tracts of private land in the central and western United States could be the first to implement this restoration. Risks of Pleistocene rewilding include the possibility of altered disease ecology and associated human health implications, as well as unexpected ecological and sociopolitical consequences of reintroductions. Establishment of programs to monitor suites of species interactions and their consequences for biodiversity and ecosystem health will be a significant challenge. Secure fencing would be a major economic cost, and social challenges will include acceptance of predation as an overriding natural process and the incorporation of pre-Columbian ecological frameworks into conservation strategies.


Subject(s)
Conservation of Natural Resources/economics, Conservation of Natural Resources/methods, Ecosystem, Food Chain, Vertebrates, Animals, North America, Species Specificity
16.
Oecologia ; 78(3): 430-431, 1989 Mar.
Article in English | MEDLINE | ID: mdl-28312593

ABSTRACT

Grasshopper densities were compared between grazed and ungrazed semidesert grassland sites in southeastern Arizona. Bouteloua-dominated perennial grass cover was about 1.5 times greater on the livestock exclosure. Grasshoppers were 3.7 times more abundant on the protected area in the summers of 1983 and 1984, when dominant species were grass-feeding members of the subfamily Gomphocerinae. In fall 1984, grasshoppers were 3.8 times more common on the grazed site, when dominants were mainly herb-feeders in the subfamily Melanoplinae. These results indicate important seasonal and taxonomic differences in the responses of grasshoppers to the activities of vertebrate grazers.

17.
Conserv Biol ; 20(4): 1242-50, 2006 Aug.
Article in English | MEDLINE | ID: mdl-16922240

ABSTRACT

Ranches are being converted to exurban housing developments in the southwestern United States, with potentially significant but little-studied impacts on biological diversity. We captured rodents on 48 traplines in grasslands, mesquite savannas, and oak savannas in southeastern Arizona that were grazed by livestock, embedded in exurban housing developments, grazed and embedded in development, or neither grazed nor embedded in development. Independent of habitat or development, rodent species richness, mean rank abundance, and capture rates of all rodents combined were negatively related to presence of livestock grazing or to its effects on vegetative ground cover. Exurban development had no obvious effects on rodent variety or abundance. Results suggest southwestern exurban developments can sustain a rich assemblage of grassland and savanna rodents if housing densities are low and houses are embedded in a matrix of natural vegetation with little grazing.


Subject(s)
Biodiversity, Housing, Rodents/classification, Animals, Arizona, Cattle/physiology, Conservation of Natural Resources, Feeding Behavior, Geography, Poaceae, Population Density, Population Dynamics