Results 1 - 20 of 65
1.
Environ Res Lett ; 16: 1-14, 2021 Jan 21.
Article in English | MEDLINE | ID: mdl-35069797

ABSTRACT

Comprehensive sampling of the carbonate system in estuaries and coastal waters can be difficult and expensive because of the complex and heterogeneous nature of near-shore environments. We show that sample collection by community science programs is a viable strategy for expanding estuarine carbonate system monitoring and prioritizing regions for more targeted assessment. 'Shell Day' was a single-day regional water monitoring event coordinating coastal carbonate chemistry observations by 59 community science programs and seven research institutions in the northeastern United States, in which 410 total alkalinity (TA) samples from 86 stations were collected. Field replicates collected at both low and high tides had a mean standard deviation between replicates of 3.6 ± 0.3 µmol kg⁻¹ (mean σ ± SE, n = 145), or 0.20 ± 0.02%. This level of precision demonstrates that, with adequate protocols for sample collection, handling, storage, and analysis, community science programs can collect TA samples that support high-quality analyses and data. Despite correlations between salinity, temperature, and TA observed at multiple spatial scales, empirical predictions of TA had relatively high root mean square errors (>48 µmol kg⁻¹). Additionally, ten stations displayed tidal variability in TA that was not likely driven by low-TA freshwater inputs. As such, TA cannot be predicted accurately from salinity using a single relationship across the northeastern US region, though predictions may be viable at more localized scales where consistent freshwater and seawater endmembers can be defined. There was a high degree of geographic heterogeneity in both mean TA and its tidal variability, and this single-day snapshot sampling identified three patterns driving variation in TA, with certain locations exhibiting increased risk of acidification.
The success of Shell Day implies that similar community science-based events could be conducted in other regions not only to expand understanding of the coastal carbonate system, but also to inventory monitoring assets, build partnerships with stakeholders, and extend education and outreach to a broader constituency.
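The replicate statistic quoted above (mean standard deviation between field replicates, ± its standard error) comes down to a short calculation. A minimal sketch in Python, using invented replicate pairs rather than actual Shell Day measurements:

```python
import math
import statistics

def replicate_precision(pairs):
    """Mean ± SE of the sample standard deviation across replicate pairs.

    For a pair (a, b) the sample standard deviation reduces to
    |a - b| / sqrt(2); units here are µmol/kg.
    """
    sds = [statistics.stdev(pair) for pair in pairs]
    mean_sd = statistics.mean(sds)
    se = statistics.stdev(sds) / math.sqrt(len(sds))
    return mean_sd, se

# Hypothetical low/high-tide TA replicate pairs (µmol/kg).
pairs = [(1820.1, 1824.9), (2105.3, 2101.0), (1650.2, 1655.8), (1990.0, 1993.5)]
mean_sd, se = replicate_precision(pairs)
print(f"mean SD between replicates: {mean_sd:.1f} ± {se:.1f} µmol/kg")
```

Applied to the study's 145 real replicate pairs, a calculation of this form would yield the reported 3.6 ± 0.3 µmol kg⁻¹.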

2.
Plant Dis ; 102(9): 1748-1758, 2018 Sep.
Article in English | MEDLINE | ID: mdl-30125211

ABSTRACT

Current management of sudden death syndrome (SDS) of soybean, caused by Fusarium virguliforme, focuses on planting resistant varieties and improving soil drainage; however, these measures are not completely effective. A 6-year study evaluated the effects of cropping system diversification on SDS and soybean yield. SDS, root health, yield, and F. virguliforme density in soil were assessed in a naturally infested field trial comparing a 2-year cropping system, consisting of a corn-soybean rotation with synthetic fertilizer applications, with 3- and 4-year cropping systems consisting of corn-soybean-oat + red clover and corn-soybean-oat + alfalfa-alfalfa rotations, respectively, both with manure and low synthetic fertilizer rates. In 5 of 6 years, SDS incidence and severity were lower, and yield was higher, in the 3- and 4-year systems than in the 2-year system. SDS severity and incidence were up to 17-fold lower in the diversified systems than in the 2-year system. Incidence and severity of SDS explained 45 to 87% of the variation in yield. Plants in the 2-year system generally showed more severe root rot and lower plant weights than plants in the diversified systems. F. virguliforme density in soil was up to fivefold greater in the 2-year system than in the 4-year system. The processes responsible for the suppression of SDS and yield protection in the diversified cropping systems still need to be determined.


Subject(s)
Fusarium/growth & development; Glycine max/microbiology; Plant Diseases/prevention & control; Agriculture; Fusarium/pathogenicity; Incidence; Plant Diseases/microbiology; Plant Diseases/statistics & numerical data
3.
Weed Res ; 58(4): 250-258, 2018 Aug.
Article in English | MEDLINE | ID: mdl-30069065

ABSTRACT

Weedy plants pose a major threat to food security, biodiversity, ecosystem services and consequently to human health and wellbeing. However, many currently used weed management approaches are increasingly unsustainable. To address this knowledge and practice gap, in June 2014, 35 weed and invasion ecologists, weed scientists, evolutionary biologists and social scientists convened a workshop to explore current and future perspectives and approaches in weed ecology and management. A horizon scanning exercise ranked a list of 124 pre-submitted questions to identify a priority list of 30 questions. These questions are discussed under seven themed headings that represent areas for renewed and emerging focus for the disciplines of weed research and practice. The themed areas considered the need for transdisciplinarity, increased adoption of integrated weed management and agroecological approaches, better understanding of weed evolution, climate change, weed invasiveness and finally, disciplinary challenges for weed science. Almost all the challenges identified rested on the need for continued efforts to diversify and integrate agroecological, socio-economic and technological approaches in weed management. These challenges are not newly conceived, though their continued prominence as research priorities highlights an ongoing intransigence that must be addressed through a more system-oriented and transdisciplinary research agenda that seeks an embedded integration of public and private research approaches. This horizon scanning exercise thus set out the building blocks needed for future weed management research and practice; however, the challenge ahead is to identify effective ways in which sufficient research and implementation efforts can be directed towards these needs.

4.
Water Sci Technol ; 64(1): 239-46, 2011.
Article in English | MEDLINE | ID: mdl-22053481

ABSTRACT

Currently more than 3 billion people live in urban areas. The urban population is predicted to increase by a further 3 billion by 2050. Rising oil prices, unreliable rainfall and natural disasters have all contributed to a rise in global food prices. Food security is becoming an increasingly important issue for many nations. There is also a growing awareness of both 'food miles' and 'virtual water'. Food miles and virtual water are concepts that describe the amount of embodied energy and water that is inherent in the food and other goods we consume. Growing urban agglomerations have been widely shown to consume vast quantities of energy and water whilst emitting harmful quantities of wastewater and stormwater runoff through the creation of massive impervious areas. In this paper it is proposed that there is an efficient way of simultaneously addressing the problems of food security, carbon emissions and stormwater pollution. Through a case study we demonstrate how it is possible to harvest and store stormwater from densely populated urban areas and use it to produce food at relatively low costs. This reduces food miles (carbon emissions) and virtual water consumption and serves to highlight the need for more sustainable land-use planning.


Subject(s)
Agriculture/methods; Conservation of Natural Resources; Rain; Water Supply; Carbon Footprint; Cities; Food Supply; Models, Theoretical; Urbanization; Victoria; Water Movements; Water Pollution/prevention & control
5.
Water Sci Technol ; 53(2): 289-301, 2006.
Article in English | MEDLINE | ID: mdl-16594348

ABSTRACT

Surplus nitrogen (N) in ground and surface water is of concern in intensive agricultural regions. Surplus N leaches during lengthy periods where annual crop systems are used in temperate regions. This paper presents a model to estimate the surplus N available for leaching to ground water beneath agricultural systems and applies the model to watersheds in an intensive maize and soybean production system. The model utilizes commonly available georeferenced data on soils, crops, and livestock, making it applicable to watersheds in many regions. The model links stocks of N in soil, crops, livestock, fertilizer and the atmosphere. Nitrogen flow centers on exchange between the soil N stocks. Nitrogen mineralization rates are defined for three soil organic matter pools, crop residue, and manure based on carbon:N ratios. Nitrogen exports from the system are harvested crops, livestock and losses to the atmosphere. Application of the model in 26 Iowa watersheds finds surpluses of 18 to 43 kg-N/ha. Surpluses exceeded measured annual nitrate-N loads in regional streams by amounts equivalent to denitrification rates in groundwater. Deficits in soil N were sufficiently small to suggest that the system is in equilibrium with soils of the region.
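The budget logic described above comes down to summing N inputs and subtracting N exports for each watershed-year. A minimal sketch; the simple two-term balance and all numbers are illustrative assumptions, not the published multi-pool model:

```python
def nitrogen_surplus(inputs_kg_ha, exports_kg_ha):
    """Surplus N (kg-N/ha) potentially available for leaching:
    total inputs minus total exports for one watershed-year."""
    return sum(inputs_kg_ha.values()) - sum(exports_kg_ha.values())

# Hypothetical annual budget; categories echo the abstract, values are invented.
inputs = {"fertilizer": 110.0, "manure": 25.0, "legume_fixation": 60.0, "deposition": 8.0}
exports = {"harvested_crops": 140.0, "livestock": 10.0, "atmospheric_losses": 18.0}
print(nitrogen_surplus(inputs, exports))  # 35.0, within the 18-43 kg-N/ha range reported
```

The published model adds internal soil, crop-residue, and manure N pools with mineralization rates set by carbon:N ratios; this sketch keeps only the outermost balance.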


Subject(s)
Agriculture/methods; Nitrogen/analysis; Animals; Carbon/analysis; Environmental Monitoring; Fertilizers; Hydrogen-Ion Concentration; Soil; Soil Pollutants; Water Movements; Water Pollutants/analysis; Water Pollution; Water Supply
6.
Crop Sci ; 44(3): 861-869, 2004.
Article in English | MEDLINE | ID: mdl-17047728

ABSTRACT

Because of expanding markets for high-value niche crops, opportunities have increased for the production of medicinal herbs in the USA. An experiment was conducted in 2001 and 2002 near Gilbert, IA, to study crop performance, weed suppression, and environmental conditions associated with the use of several organic mulches in the production of two herbs, catnip (Nepeta cataria L.) and St. John's wort (Hypericum perforatum L. 'Helos'). Treatments were arranged in a completely randomized design and included a positive (hand-weeded) control, a negative (nonweeded) control, oat straw, a flax straw mat, and a nonwoven wool mat. Catnip plant height was significantly greater in the oat straw treatment than in the other treatments from 4 through 6 wk in 2001; from 4 to 8 wk in 2002, catnip plant height and width were significantly lower in the negative control than in the other treatments. Catnip yield was significantly higher in the flax straw mat than in all other treatments in 2001. In 2002, St. John's wort yields did not differ statistically among treatments. All weed management treatments had significantly fewer weeds than the nonweeded rows in 2002. Total weed density comparisons in each crop over 2 yr showed fewer weeds in the flax straw and wool mat treatments than in the positive control plots. There was no significant weed management treatment effect on the concentration of the target compounds, nepetalactone in catnip and pseudohypericin-hypericin in St. John's wort, although there was a trend toward higher concentrations in the flax straw treatment.

7.
Int J Obes Relat Metab Disord ; 27(6): 684-92, 2003 Jun.
Article in English | MEDLINE | ID: mdl-12833112

ABSTRACT

OBJECTIVE: To assess the relation between body mass index (BMI) levels and various lifestyle variables related to physical activity and specific characteristics of a healthy eating pattern, using baseline cross-sectional data from the Wellness IN the Rockies project. SUBJECTS: A total of 928 males and 889 females, aged 18-99 y, recruited from six rural communities in Wyoming, Montana, and Idaho. MEASUREMENTS: Using BMI as the criterion, overweight was defined as a BMI ≥25 kg/m² and obesity was defined as a BMI ≥30 kg/m². All participants in this study completed a questionnaire that elicited sociodemographic information, self-reported height and weight, and data related to specific dietary intakes, eating-related behaviors, and physical activity behaviors and perceptions. RESULTS: Prevalence of overweight was 70% in men and 59% in women. Increased likelihood of overweight or obesity was associated with greater frequency of the following: drinking sweetened beverages such as soft drinks/soda pop, ordering supersized portions, eating while doing other activities, and watching television. Other predictors were lower frequency of participation in physical activity and the perception of not getting as much exercise as needed. CONCLUSIONS: The increased probability of having a high BMI in individuals who more often eat while doing another activity appears to be a novel finding that will need to be substantiated by additional research. The finding that the vast majority of overweight and obese respondents believed that they do not get as much exercise as needed strengthens the assertion that finding ways to increase participation in physical activity should remain a high priority in obesity prevention and intervention efforts at the community and individual levels.


Subject(s)
Body Mass Index; Eating; Feeding Behavior; Obesity/epidemiology; Adolescent; Adult; Aged; Aged, 80 and over; Cross-Sectional Studies; Diet; Exercise; Female; Humans; Idaho/epidemiology; Life Style; Male; Middle Aged; Montana/epidemiology; Prevalence; Rural Population; Wyoming/epidemiology
8.
Int J Sports Med ; 23(8): 544-8, 2002 Nov.
Article in English | MEDLINE | ID: mdl-12439768

ABSTRACT

The purpose of this study was to examine the effects of a 6 week high-intensity interval training (HIT) program, followed by 2 weeks recovery, on iron status in cyclists. Eleven male collegiate cyclists (21.8 ± 0.8 yr, 71.4 ± 2.2 kg, and 8.6 ± 0.9% body fat) participated in a 6 week cycle training program that consisted of 5 days of high-intensity interval and endurance training per week. Hematocrit (Hct), hemoglobin (Hb), red blood cell (RBC) count, serum iron, serum ferritin, and total iron binding capacity (TIBC) were analyzed from venous blood samples taken at baseline (B), and each week following interval training (T1-T6) and recovery (R1-R2). Dietary intakes including iron were monitored weekly. The dependent variables were analyzed by repeated measures ANOVA (p < 0.05). RBC count, Hb, and Hct were significantly decreased compared to baseline at T3. Serum iron did not change significantly. Serum ferritin decreased significantly from 55.9 ± 9.7 ng·ml⁻¹ at B to 42.2 ± 8.0 ng·ml⁻¹ at T5 and remained depressed at T6, R1, and R2. TIBC was significantly increased above baseline at T3, T4, T6, R1, and R2. These results suggest that 6 weeks of high-intensity interval training can reduce iron stores. It is possible that this reduction in iron stores over time could adversely affect aerobic cycling performance.


Subject(s)
Bicycling/physiology; Exercise/physiology; Iron/blood; Physical Education and Training/methods; Adult; Erythrocyte Count; Ferritins/blood; Hematocrit; Hemoglobins/analysis; Humans; Male; Plasma Volume/physiology
9.
Ann Emerg Med ; 38(6): 633-8, 2001 Dec.
Article in English | MEDLINE | ID: mdl-11719741

ABSTRACT

BACKGROUND: In a landmark hypothesis-generating study, Todd et al found that a difference of approximately 13 mm (95% confidence interval [CI] 10 to 17 mm) on a visual analog scale (VAS) represented the minimum change in acute pain that was clinically significant in a cohort of trauma patients. STUDY OBJECTIVE: We test the hypothesis that the minimum clinically significant change in pain as measured by the VAS in an independent, more heterogeneous validation cohort is approximately 13 mm. METHODS: This was a prospective, observational cohort study of adults presenting to 2 urban emergency departments with pain. At 30-minute intervals during a 2-hour period, patients marked a VAS and were asked if their pain was "much less," "a little less," "about the same," "a little more," or "much more." All data were obtained without reference to prior VAS scores. The minimum clinically significant change in pain was defined a priori as the difference in millimeters between the current and immediately preceding VAS scores when "a little more" or "a little less pain" was reported. RESULTS: Ninety-six patients enrolled in the study, providing 332 paired pain measurements. There were 141 paired measurements designated by patients as "a little less" or "a little more" pain. The mean clinically significant difference between consecutive ratings of pain in the combined "little less" or "little more" groups was 13 mm (95% CI 10 to 16 mm). The difference between this finding and that of Todd et al was 0 mm (95% CI -4 to 4 mm). CONCLUSION: These data are virtually identical to previous findings indicating that a difference of 13 mm on a VAS represents, on average, the minimum change in acute pain that is clinically significant.
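The headline statistic, a mean change in VAS score with a 95% CI, can be sketched as below. The normal-approximation interval and the sample differences are illustrative assumptions, not the study's data or analysis code:

```python
import math
import statistics

def mean_with_ci(diffs_mm, z=1.96):
    """Mean of paired VAS differences (mm) with a normal-approximation 95% CI."""
    m = statistics.mean(diffs_mm)
    se = statistics.stdev(diffs_mm) / math.sqrt(len(diffs_mm))
    return m, (m - z * se, m + z * se)

# Hypothetical differences for ratings of "a little more pain" (mm).
diffs = [9, 15, 11, 17, 13, 12, 14, 13]
m, (lo, hi) = mean_with_ci(diffs)
print(f"{m:.0f} mm (95% CI {lo:.1f} to {hi:.1f} mm)")
```

In the study itself the same kind of interval over 141 paired measurements gave 13 mm (95% CI 10 to 16 mm).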


Subject(s)
Pain Measurement/statistics & numerical data; Pain/classification; Adolescent; Adult; Aged; Cohort Studies; Emergency Service, Hospital; Female; Hospitals, Urban; Humans; Male; Middle Aged; Pain/diagnosis; Prospective Studies; Reproducibility of Results
10.
Breast Cancer Res ; 3(5): 336-41, 2001.
Article in English | MEDLINE | ID: mdl-11597324

ABSTRACT

BACKGROUND: Current methodology often cannot distinguish second primary breast cancers from multifocal disease, a potentially important distinction for clinical management. In the present study we evaluated the use of oligonucleotide-based microarray analysis in determining the clonality of tumors by comparing gene expression profiles. METHOD: Total RNA was extracted from two tumors with no apparent physical connection that were located in the right breast of an 87-year-old woman diagnosed with invasive ductal carcinoma (IDC). The RNA was hybridized to the Affymetrix Human Genome U95A Gene Chip (12,500 known human genes) and analyzed using the Gene Chip Analysis Suite 3.3 (Affymetrix, Inc, Santa Clara, CA, USA) and JMPIN 3.2.6 (SAS Institute, Inc, Cary, NC, USA). Gene expression profiles of tumors from five additional patients were compared in order to evaluate the heterogeneity in gene expression between tumors with similar clinical characteristics. RESULTS: The adjacent breast tumors had a pairwise correlation coefficient of 0.987, and were essentially indistinguishable by microarray analysis. Analysis of gene expression profiles from different individuals, however, generated a pairwise correlation coefficient of 0.710. CONCLUSION: Transcriptional profiling may be a useful diagnostic tool for determining tumor clonality and heterogeneity, and may ultimately impact on therapeutic decision making.
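The pairwise correlation coefficients above are plain Pearson correlations between two gene expression vectors. A minimal sketch with invented expression values (the study itself used the Affymetrix analysis suite and JMPIN, not this code):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical expression levels for five genes in two tumors.
tumor_a = [5.1, 8.2, 0.4, 3.3, 7.7]
tumor_b = [5.0, 8.4, 0.5, 3.1, 7.9]
print(round(pearson_r(tumor_a, tumor_b), 3))  # close to 1 for near-identical profiles
```

On this reading, a coefficient near 0.99, as between the adjacent tumors, is consistent with a shared clonal origin, while values around 0.7, as between tumors from different patients, are not.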


Subject(s)
Breast Neoplasms/pathology; Carcinoma, Ductal, Breast/pathology; Neoplasms, Second Primary/pathology; RNA, Neoplasm/genetics; Aged; Aged, 80 and over; Breast Neoplasms/genetics; Carcinoma, Ductal, Breast/genetics; Diagnosis, Differential; Female; Gene Expression Profiling; Humans; Neoplasms, Second Primary/genetics; Oligonucleotide Array Sequence Analysis
11.
Pediatr Emerg Care ; 17(1): 47-51, 2001 Feb.
Article in English | MEDLINE | ID: mdl-11265909

ABSTRACT

OBJECTIVE: The aims of the study were to determine the following: 1) whether a fever education program (interactive or written) reduces parent fever anxiety; 2) whether an interactive fever program was more effective as a teaching style than standard written material alone; and 3) whether a fever program increases parent fever home management and reduces return emergency department (ED) visits. METHOD: A quasi-experimental, pretest and post-test pilot study examining parental fever anxiety was conducted at The Children's Hospital of Philadelphia. Eligible participants consisted of 87 parents and their children, aged 3 months to 5 years, presenting with fever >38.4 °C and without coexisting serious illness. RESULTS: The interactive fever education program and the standard written fever pamphlet were equally effective as teaching methods. Data revealed a 30% reduction in fever anxiety (from moderate-severe on arrival to none-low after fever education), increased parent fever home management skills with correct use of a thermometer and antipyretics, and reduced unnecessary return ED visits. CONCLUSION: Parents in the acute and nonacute care settings may benefit from an interactive fever education program that includes the definition and benefit of fever, the correct use of a thermometer, fever home management skills, and appropriate fever telephone follow-up.


Subject(s)
Anxiety/prevention & control; Fever/diagnosis; Fever/therapy; Health Education/methods; Home Nursing/education; Home Nursing/methods; Parents/education; Parents/psychology; Teaching/methods; Adult; Anxiety/diagnosis; Anxiety/psychology; Attitude to Health; Child, Preschool; Emergency Nursing; Emergency Service, Hospital/statistics & numerical data; Follow-Up Studies; Health Knowledge, Attitudes, Practice; Hospitals, Pediatric; Humans; Infant; Nursing Evaluation Research; Pamphlets; Philadelphia; Pilot Projects; Program Evaluation; Teaching Materials
12.
Appetite ; 36(1): 51-6, 2001 Feb.
Article in English | MEDLINE | ID: mdl-11161345

ABSTRACT

The primary objectives were to assess dietary fat reduction/avoidance behaviors within a sample of college students, and to assess the strength of the relationship between self-reported fat avoidance and a number of variables including body mass index (BMI), self-esteem, and responses to the Eating Disorder Inventory (EDI) and Eating Attitudes Test (EAT). A total of 210 female and 114 male undergraduate students were administered a food habits questionnaire (which assessed four dietary fat reduction behaviors), the EDI, the dieting subscale of the EAT, and the Coopersmith Self-Esteem Inventory. Measured heights and weights were used to compute BMI. Thirty-eight percent of the females and 13% of the males reported that they had dieted with the express purpose of losing weight in the past 12 months. The finding that females in general, and female dieters in particular, scored higher on the EAT dieting subscale and relied on three of the four dietary fat reduction behaviors to a greater extent than did males supports the assertion that women rely heavily on dietary fat avoidance as a method to reduce caloric intake. In females, the finding that a greater degree of fat avoidance was associated with significantly lower levels of self-esteem and higher scores on the EAT and on six of the eight EDI subscales suggested that fat avoidance may be a predictor of eating pathology and/or psychosocial problems in college-aged women.


Subject(s)
Diet, Reducing/psychology; Dietary Fats/administration & dosage; Feeding and Eating Disorders/psychology; Students/psychology; Weight Loss; Adolescent; Adult; Body Mass Index; Diet, Reducing/statistics & numerical data; Feeding Behavior; Female; Humans; Male; Middle Aged; Self Concept; Surveys and Questionnaires
13.
J Urol ; 163(5): 1565-9, 2000 May.
Article in English | MEDLINE | ID: mdl-10751889

ABSTRACT

PURPOSE: Urinary oxalate is a primary determinant of the level of calcium oxalate saturation and the formation of calcium oxalate crystals, a key event in kidney stone formation. The primary objective of this study was to compare the effects of calcium carbonate and magnesium oxide on oxalate absorption. MATERIALS AND METHODS: An experimental model was used that allowed differentiation between endogenously derived and oxalate load-derived urinary oxalate. Twenty-four healthy subjects (10 males, 14 females) participated in three oxalate load (OL) tests: control (OL alone), calcium carbonate (OL with concomitant calcium carbonate ingestion), and magnesium oxide (OL with concomitant magnesium oxide ingestion). Oxalate loads consisted of 180 mg unlabeled and 18 mg 1,2-[¹³C₂]oxalic acid. Timed urine samples were collected after the OL for analysis of oxalate, calcium, magnesium, and creatinine. RESULTS: Both the calcium carbonate and magnesium oxide treatments were associated with significantly lower load-derived oxalate levels than the control treatment at all time points within the initial 24 hours after oxalate ingestion. There were no treatment effects on endogenous oxalate levels. The efficiency of oxalate absorption for the calcium carbonate (5.1%) and magnesium oxide (7.6%) treatments was significantly lower than that for the control treatment (13.5%). CONCLUSIONS: The results suggested that magnesium was nearly as effective as calcium in reducing oxalate absorption and urinary excretion. Higher levels of urinary oxalate, calcium, and magnesium in males appeared to be largely a function of body size, since gender differences either disappeared or were reversed when a correction was made for urinary creatinine excretion.
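Absorption efficiency in such load tests is typically taken as the percentage of the isotope-labeled load recovered in urine. A minimal sketch; the recovery value is invented for illustration:

```python
def absorption_efficiency(urinary_label_mg, load_label_mg):
    """Percent of the isotope-labeled oxalate load recovered in urine,
    taken here as the efficiency of intestinal oxalate absorption."""
    return 100.0 * urinary_label_mg / load_label_mg

# Hypothetical recovery of 2.43 mg from the 18 mg labeled load.
print(round(absorption_efficiency(2.43, 18.0), 1))  # 13.5, matching the control-arm figure
```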


Subject(s)
Calcium Carbonate/pharmacology; Magnesium Oxide/pharmacology; Oxalates/urine; Adult; Female; Humans; Male; Oxalates/administration & dosage; Sex Characteristics
14.
Am Surg ; 65(9): 863-4, 1999 Sep.
Article in English | MEDLINE | ID: mdl-10484090

ABSTRACT

Dog bite injuries in children are a preventable health problem. To characterize this type of injury, we defined demographic criteria and patterns of injury inflicted by dogs in our pediatric population. A retrospective chart review was conducted of pediatric patients with dog bite injuries admitted to a Level I pediatric trauma center from January 1986 through June 1998. Patient demographics, canine characteristics, and hospital patient data were collected and analyzed using the Excel program and appropriate statistical methodology. There were 67 patient records reviewed. Thirty-eight (57%) of the patients were male and 29 (43%) were female. There were 43 (64%) white children, 22 (33%) African-American children, and 2 (3%) Hispanic children. The average age of the children was 6.2 ± 4.2 years, with an average weight of 23.3 ± 13.7 kg. More than half the attacks occurred in the afternoon, and 55% of the attacks were documented as "unprovoked." Thirty-one (46%) of the attacks involved family pets, and 30 (45%) dogs were known to the attacked child. The head and neck were involved in more than 67% of the injuries. Pit bulls caused 25% of the bite injuries. Large dogs were responsible for 88% of the attacks. Forty-four (66%) patients required operative intervention. Twenty-eight of these patients had multiple anatomical areas injured. There were 44 procedures involving the head and neck, 21 involving the extremities, and 6 involving other areas of the body. All patients 5 years of age and under had head and neck injuries. Dog bite injuries requiring admission occur more often in male children. Caucasian and African-American children made up the majority of those affected. Children under 5 years of age suffered the most devastating injuries. More than half of the attacks were not provoked. More than two-thirds of the injuries involved the head and neck.
We conclude that effective prevention strategies must stress careful supervision of young children and the family or neighbor's dog, a scenario that may easily lead to complacency and set the stage for a severe injury.


Subject(s)
Bites and Stings/epidemiology; Dogs; Adolescent; Age Distribution; Animals; Child; Child, Preschool; Female; Hospitalization/statistics & numerical data; Humans; Infant; Male; Philadelphia/epidemiology; Retrospective Studies; Sex Distribution; Urban Population/statistics & numerical data
15.
Psychiatry Res ; 86(2): 163-73, 1999 May 31.
Article in English | MEDLINE | ID: mdl-10397418

ABSTRACT

Although aggression research in general has been hampered by a lack of objective measurements of aggressive acts, two types of aggressive acts, impulsive vs. premeditated, have been studied extensively in recent years. These two types of aggression have been primarily measured by structured or semi-structured interviews. The current study was designed to assess the construct validity of these two types of aggression using a self-report questionnaire which included items gleaned from the content of interviews used in past studies. For this study, 216 college students assessed their own aggressive acts rather than answering general questions about aggression. The students were not significantly different from normative sample groups on self-report measures of impulsiveness, aggression, and anger/hostility. A PCA factor analysis with a promax rotation of the items on the self-report questionnaire identified four factors: impulsive aggression; mood on the day the act occurred; premeditated aggression; and agitation. Thus, impulsive and premeditated aggression are independent constructs which exist in varying degrees among these 'normal' persons in a non-clinical sample. Impulsive aggression was characterized in part by feelings of remorse following the acts and by thought confusion. Premeditated aggression was related to social gain and dominance.


Subject(s)
Aggression/classification; Impulsive Behavior/classification; Self Disclosure; Violence/classification; Adult; Aggression/psychology; Factor Analysis, Statistical; Female; Humans; Male; Models, Psychological; Models, Statistical; Personality Inventory; Surveys and Questionnaires
17.
Cutis ; 62(1): 41-3, 1998 Jul.
Article in English | MEDLINE | ID: mdl-9675532

ABSTRACT

Felodipine is a calcium channel blocking agent used in the management of hypertension and angina. We report a case of gingival hyperplasia in a patient with chronic use of this drug. Gingival changes occurred soon after initiation of felodipine and improved upon its discontinuation. The clinical characteristics, inciting agents, proposed pathogenetic mechanisms, as well as prevention and treatment of drug-induced gingival hyperplasia are briefly reviewed.


Subject(s)
Calcium Channel Blockers/adverse effects; Felodipine/adverse effects; Gingival Hyperplasia/chemically induced; Aged; Female; Humans
18.
Cell Mol Biol (Noisy-le-grand) ; 44(1): 211-7, 1998 Feb.
Article in English | MEDLINE | ID: mdl-9551652

ABSTRACT

A new method for infrared analysis of tissues and cells is presented. The method is based on Fourier transform infrared microspectroscopy coupled with attenuated total reflectance. The technique allows spectroscopic measurements on the same samples used by pathologists for histopathological evaluation, e.g. stained samples on plain glass slides. Because the same specimen serves for histopathology, the method requires no sample preparation or modification. Significantly, the sample is not damaged. Glass absorbs in the infrared and thus has not been used previously in infrared analysis of tissues and cells. Conventional infrared techniques instead use expensive substrates that do not absorb infrared radiation, such as BaF₂ windows and gold-coated slides; these measurements, however, require special preparation and result in the destruction of the sample. Breast cancer tissues were examined to demonstrate the feasibility and reproducibility of the new method. Linear discriminant analysis was used to discriminate and classify three types of cells: benign, atypical hyperplasia, and malignant. Benign vs. malignant cells and benign vs. atypical hyperplasia were each discriminated with 100% accuracy, and malignant vs. atypical hyperplasia with an accuracy of 90% or higher.


Subject(s)
Breast Neoplasms/pathology; Carcinoma, Ductal, Breast/pathology; Spectroscopy, Fourier Transform Infrared/methods; Breast Neoplasms/classification; Breast Neoplasms/metabolism; Carcinoma, Ductal, Breast/classification; Carcinoma, Ductal, Breast/metabolism; Discriminant Analysis; Female; Humans; Middle Aged