Results 1 - 20 of 23
1.
BMC Med Educ ; 23(1): 137, 2023 Mar 01.
Article in English | MEDLINE | ID: mdl-36859253

ABSTRACT

BACKGROUND: Morning rounds by an acute care surgery (ACS) service at a Level 1 trauma center are uniquely demanding, given the fast pace, high acuity, and high patient volume. These demands notwithstanding, communication remains integral to the success of surgical teams. Yet there are few published curricula that address trauma inpatient communication needs. Observations at our institution confirmed that the surgical team lacked a shared mental model for communication. We hypothesized that creating a relationship-centered rounding conceptual framework would enhance the provider-patient experience. STUDY DESIGN: A mixed-methods approach was used for this study. A multi-pronged needs assessment was conducted. Provider communication items from the Press Ganey and Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) surveys were used to measure patients' expressed needs. Faculty with experience in relationship-centered communication observed morning rounds and documented demonstrated behaviors. A five-hour workshop was designed based on the identified needs. Pre- and post-course assessments and a course evaluation were conducted. Provider-related patient satisfaction items were measured for six months before and six months after the workshop. RESULTS: The needs assessment revealed the lack of a shared communication framework and a lack of leadership skills among senior trauma residents. Barriers included time constraints, patient load, and interruptions during rounds. The curriculum was very well received. The self-reported behaviors that changed most between the pre- and post-workshop surveys were: I listened without interrupting; I spoke clearly and at a moderate pace; I repeated key points; and I checked that the patient understood. All of these changed from being performed by 50% of respondents "about half of the time" to 100% of them "always". Press Ganey top-box likelihood to recommend (LTR) and provider-related top-box items showed a trend toward improvement after the training was implemented, with a percentage difference of up to 20%. CONCLUSION: The Inpatient Relationship-Centered Communication Curriculum (I-RCCC), targeting senior residents and nurse practitioners (NPs), was feasible, practical, and well received by participants. There was a trend toward an increase in LTR and provider-specific patient satisfaction items. The curriculum will be refined based on these results and is potentially scalable to other surgical specialties.


Subject(s)
Curriculum , Inpatients , Humans , Communication , Critical Care , Faculty
2.
Prehosp Emerg Care ; 23(2): 195-200, 2019.
Article in English | MEDLINE | ID: mdl-30118372

ABSTRACT

BACKGROUND: Use of prehospital stroke scales may enhance stroke detection, improve treatment rates, and reduce treatment delays. Current scales, however, may lack detection accuracy. We therefore examined whether adding coordination (Balance) and diplopia (Eyes) assessments increases the accuracy of the Face-Arms-Speech-Time (FAST) scale in a multisite prospective study of emergency response activations for presumed stroke. METHODS: This was a prospective study of emergency response activations for presumed stroke in Santa Clara County, California. Emergency medical responders were trained in the Balance-Eyes-Face-Arms-Speech-Time (BEFAST) scale and administered it on scene to all patients who were within 6 hours of onset of neurological symptoms. Each patient's final diagnosis (stroke vs. no stroke) was based on review of hospital records. We compared the performance of the BEFAST and FAST scales for stroke detection. RESULTS: Three hundred fifty-nine patients were included in our analysis. Compared to nonstroke patients (n = 200), stroke patients (n = 159) more often scored positive on each of the five elements of the BEFAST scale (p < 0.05 for each). In multivariable analysis, only facial droop and arm weakness were independent predictors of stroke (p < 0.05). BEFAST and FAST scale accuracy for stroke identification was comparable (area under the curve [AUC] = 0.70 vs. AUC = 0.69, p = 0.36). The optimal cutoff for stroke detection was ≥1 for both scales. At this threshold, the positive predictive value (PPV) was 0.49 for BEFAST and 0.53 for FAST, and the negative predictive value (NPV) was 0.93 for BEFAST and 0.86 for FAST. CONCLUSION: Adding coordination and diplopia assessments to face, arm, and speech assessment does not improve stroke detection in the prehospital setting.
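As a worked illustration of the threshold metrics quoted above, the sketch below derives sensitivity, specificity, PPV, and NPV from a 2x2 confusion matrix. The counts are hypothetical placeholders chosen only to roughly match the study's marginals (159 strokes, 200 nonstrokes); the abstract reports only the derived PPV and NPV values.

```python
# Sketch: diagnostic accuracy of a prehospital stroke scale from a 2x2 table.
# The counts below are hypothetical placeholders, not the study's raw data.

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Return standard screening metrics from true/false positives/negatives."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # probability of stroke given a positive scale
        "npv": tn / (tn + fn),  # probability of no stroke given a negative scale
    }

# Hypothetical example: 159 strokes, 200 nonstrokes, scale positive at >=1 item.
print(diagnostic_metrics(tp=150, fp=140, fn=9, tn=60))
```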


Subject(s)
Emergency Medical Services , Stroke/diagnosis , Aged , Area Under Curve , Arm , California , Female , Humans , Male , Middle Aged , Motor Activity , Physical Examination , Postural Balance , Predictive Value of Tests , Prognosis , Prospective Studies , Speech , Vision, Ocular
4.
J Neurosci ; 35(13): 5180-6, 2015 Apr 01.
Article in English | MEDLINE | ID: mdl-25834044

ABSTRACT

It remains unclear how single neurons in the human brain represent whole-object visual stimuli. While recordings in both human and nonhuman primates have shown distributed representations of objects (many neurons encoding multiple objects), recordings of single neurons in the human medial temporal lobe, taken while subjects discriminated objects over multiple presentations, have shown gnostic representations (single neurons encoding one object). Because some studies suggest that repeated viewing may enhance neural selectivity for objects, we had human subjects discriminate objects in a single, more naturalistic viewing session. We found that, across 432 well-isolated neurons recorded in the hippocampus and amygdala, the average fraction of objects encoded was 26%. We also found that more neurons encoded several objects than only one object, both in the hippocampus (28% vs 18%, p < 0.001) and in the amygdala (30% vs 19%, p < 0.001). Thus, during realistic viewing experiences, typical neurons in the human medial temporal lobe code for a considerable range of objects spanning multiple semantic categories.


Subject(s)
Amygdala/cytology , Amygdala/physiology , Hippocampus/cytology , Hippocampus/physiology , Neurons/physiology , Visual Perception/physiology , Action Potentials/physiology , Adult , Female , Humans , Male , Middle Aged , Models, Neurological , Photic Stimulation , Young Adult
5.
Pediatr Emerg Care ; 32(12): 856-862, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27749795

ABSTRACT

OBJECTIVE: Scorpion antivenom was recently approved for use in patients with clinically significant scorpion envenomation in the United States; no formal economic analysis on its impact on cost of management has been performed. METHODS: Three different strategies of management of scorpion envenomation with systemic neurotoxic symptoms in children were compared for cost minimization from a societal perspective. In strategy I, patients were managed with supportive care only without antivenom. In strategy II, an aggressive strategy of full-dose antivenom (initial dose of 3 vials with the use of additional vials administered 1 vial at a time) was considered. In strategy III, a single-vial serial antivenom dosing strategy titrated to clinical response was considered. Clinical probabilities for the different strategies were obtained from retrospective review of medical records of patients with scorpion envenomation over a 10-year period at our institution. Baseline cost values were obtained from patient reimbursement data from our institution. RESULTS: In baseline analysis, strategy I of supportive care only with no antivenom was least costly at US $3466.50/patient. Strategy III of single-vial serial dosing was intermediate but less expensive than strategy II of full-dose antivenom, with an incremental cost of US $3171.08 per patient. In a 1-way sensitivity analysis, at a threshold antivenom cost of US $1577.87, strategy III of single-vial serial dosing became the least costly strategy. CONCLUSIONS: For children with scorpion envenomation, use of a management strategy based on serial dosing of antivenom titrated to clinical response is less costly than a strategy of initial use of full-dose antivenom.
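The sketch below illustrates the shape of the cost-minimization and one-way sensitivity analysis the abstract describes. Only the strategy I total (US $3,466.50) and the US $1,577.87 threshold come from the abstract; the vial counts and non-drug costs are hypothetical placeholders, calibrated so the strategies cross near the reported threshold.

```python
# Minimal cost-minimization sketch under hypothetical assumptions: per-vial
# counts and non-drug costs are placeholders, calibrated so strategy III ties
# strategy I at the reported $1,577.87/vial threshold.

def strategy_costs(vial_cost: float) -> dict:
    supportive_only = 3466.50                    # strategy I total (reported)
    vials_full, nondrug_full = 3.5, 900.0        # hypothetical
    vials_serial, nondrug_serial = 1.8, 626.33   # hypothetical, calibrated
    return {
        "I: supportive care only": supportive_only,
        "II: full-dose antivenom": vials_full * vial_cost + nondrug_full,
        "III: serial single vials": vials_serial * vial_cost + nondrug_serial,
    }

# One-way sensitivity analysis over the price of a vial of antivenom.
for vial_cost in (1000.0, 1577.87, 2500.0, 3800.0):
    costs = strategy_costs(vial_cost)
    best = min(costs, key=costs.get)
    print(f"vial cost ${vial_cost:>8.2f} -> least costly: {best}")
```

Below the threshold vial price, strategy III becomes least costly, mirroring the abstract's sensitivity result.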


Subject(s)
Antivenins/administration & dosage , Antivenins/economics , Scorpion Stings/drug therapy , Scorpion Stings/economics , Adolescent , Child , Child, Preschool , Cost-Benefit Analysis/methods , Decision Support Techniques , Disease Management , Drug Administration Schedule , Humans , Infant , Retrospective Studies , Scorpion Venoms/antagonists & inhibitors , Treatment Outcome , United States
6.
Pediatr Emerg Care ; 31(5): 339-42, 2015 May.
Article in English | MEDLINE | ID: mdl-25875993

ABSTRACT

OBJECTIVE: Effective physician-patient communication is critical to the clinical decision-making process. We studied parental recall of information provided during the informed consent discussion before emergency medical procedures in the pediatric emergency department of an inner-city hospital with a large bilingual population. METHODS: Fifty-five parent/child dyads undergoing emergency medical procedures were prospectively surveyed, in English or Spanish, after the procedure for recall of the informed consent information. Exact logistic regression was used to predict the ability to name a risk, benefit, and alternative to the procedure based on a parent's language, education, and acculturation. RESULTS: Among English-speaking parents, a higher proportion tended to be able to name a risk, benefit, or alternative. Our regression models showed that, overall, parents with more than a high school education had nearly five times higher odds of being able to name a risk. CONCLUSIONS: A gap in communication may exist between physicians and patients (or parents of patients) during the consent process, and this gap may be affected by sociodemographic factors such as language and education level.
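A sketch of the kind of model the abstract describes. Note the study used exact logistic regression, which is better suited to a sample of 55; the standard maximum-likelihood Logit from statsmodels is used here as a stand-in, and the data are synthetic placeholders, not the study data.

```python
# Sketch: logistic regression of consent recall on parent characteristics.
# Standard ML Logit is a stand-in for the study's exact logistic regression;
# predictors and outcome below are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 55
beyond_high_school = rng.integers(0, 2, n)  # 1 = more than high school
spanish_speaking = rng.integers(0, 2, n)    # 1 = surveyed in Spanish
# Simulate recall with ~5x odds for higher education (log-odds = log 5).
logit = -1.0 + np.log(5.0) * beyond_high_school - 0.5 * spanish_speaking
named_a_risk = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack([beyond_high_school, spanish_speaking]))
fit = sm.Logit(named_a_risk.astype(float), X).fit(disp=0)
print(np.exp(fit.params[1:]))  # odds ratios for education and language
```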


Subject(s)
Communication , Physician-Patient Relations , Professional-Family Relations , Adolescent , Adult , Child , Child, Preschool , Communication Barriers , Consent Forms , Emergency Service, Hospital , Female , Health Literacy/trends , Hispanic or Latino/statistics & numerical data , Humans , Infant , Infant, Newborn , Logistic Models , Male , Markov Chains , Mental Recall , Middle Aged , Odds Ratio , Parents , Prospective Studies , Socioeconomic Factors , Young Adult
7.
Ann Emerg Med ; 64(5): 537-46, 2014 Nov.
Article in English | MEDLINE | ID: mdl-24970245

ABSTRACT

STUDY OBJECTIVE: Acute HIV infection is a clinical diagnosis aided by technology. Detecting the highly infectious acute stage of HIV infection is critical to reducing transmission and improving long-term outcomes. The Maricopa Integrated Health System implemented nontargeted, opt-out HIV screening with a fourth-generation antigen/antibody combination HIV assay in our adult emergency department (ED) at Maricopa Medical Center to assess the prevalence of both acute and chronic unrecognized HIV. METHODS: Eligible patients aged 18 to 64 years were tested for HIV if they did not opt out and had blood drawn as part of their ED care. Patients were not eligible if they had a known HIV or AIDS diagnosis, exhibited altered mental status, were current residents of a long-term psychiatric or correctional facility, or prompted a trauma activation. Reactive test results were delivered by a physician with the assistance of a linkage-to-care specialist. Specimens with a reactive fourth-generation assay result underwent confirmatory testing. RESULTS: From July 11, 2011, through January 5, 2014, 27,952 HIV screenings were performed on 22,468 patients; 78 (0.28%) had new HIV diagnoses. Of those, 18 (23% of all new diagnoses) were acute HIV infections, and 22 patients (28%) had a CD4 count of less than 200 cells/µL or an opportunistic infection. CONCLUSION: HIV testing with a fourth-generation antigen/antibody laboratory test producing rapid results is feasible in an ED. Unexpectedly, nearly one fourth of patients with undiagnosed HIV had acute infections, which would have been more difficult to detect with previous testing technology.


Subject(s)
AIDS Serodiagnosis/methods , Emergency Service, Hospital , HIV Infections/diagnosis , AIDS Serodiagnosis/statistics & numerical data , Acute Disease , Adolescent , Adult , Aged , Arizona/epidemiology , Emergency Service, Hospital/statistics & numerical data , Female , HIV Infections/epidemiology , Humans , Male , Middle Aged , Treatment Refusal , Young Adult
8.
JMIR Hum Factors ; 11: e53940, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38916941

ABSTRACT

BACKGROUND: In pandemic situations, digital contact tracing (DCT) can be an effective way to assess one's risk of infection and to inform others in case of infection. DCT apps can support the information gathering and analysis processes of users aiming to trace contacts. However, users' intention to use DCT information, and their actual use of it, may depend on the perceived benefits of contact tracing. While existing research has examined acceptance of DCT, automation-related user experience factors have been overlooked. OBJECTIVE: We pursued three goals: (1) to analyze how automation-related user experience (ie, perceived trustworthiness, traceability, and usefulness) relates to user behavior toward a DCT app, (2) to contextualize these effects with health behavior factors (ie, threat appraisal and moral obligation), and (3) to collect qualitative data on user demands for improved DCT communication. METHODS: Survey data were collected from a web-based convenience sample of 317 users of a nationally distributed DCT app during the COVID-19 pandemic, after the app had been in app stores for more than 1 year. We assessed automation-related user experience as well as threat appraisal and moral obligation regarding DCT use, and estimated a partial least squares structural equation model predicting use intention. To identify practical steps toward a better user experience, we also surveyed users' needs for improved communication of information via the app and analyzed their responses using thematic analysis. RESULTS: Data validity and perceived usefulness showed a significant correlation of r=0.38 (P<.001), goal congruity and perceived usefulness correlated at r=0.47 (P<.001), and result diagnosticity and perceived usefulness had a strong correlation of r=0.56 (P<.001). In addition, a correlation of r=0.35 (P<.001) was observed between Subjective Information Processing Awareness and perceived usefulness, suggesting that automation-related changes might influence the perceived utility of DCT. Finally, a moderate positive correlation of r=0.47 (P<.001) was found between perceived usefulness and use intention, highlighting the connection between user experience variables and use intention. Partial least squares structural equation modeling explained 55.6% of the variance in use intention, with the strongest direct predictor being perceived trustworthiness (β=.54; P<.001), followed by moral obligation (β=.22; P<.001). In the qualitative data, users mainly demanded more detailed information about contacts (eg, place and time of contact). They also wanted to share information (eg, whether they wore a mask) to improve the accuracy and diagnosticity of the risk calculation. CONCLUSIONS: The perceived result diagnosticity of DCT apps is crucial for perceived trustworthiness and use intention. By designing for high diagnosticity for the user, DCT apps could better support users' action regulation, resulting in higher perceived trustworthiness and greater use in pandemic situations. In general, automation-related user experience matters more for use intention than general health behavior factors or experience.
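A minimal sketch of the bivariate correlations reported in the abstract; the full PLS-SEM analysis is not reproduced. Variable names mirror the abstract's constructs, but the data are synthetic placeholders with arbitrary mixing weights.

```python
# Sketch: Pearson correlations between user experience constructs, with
# synthetic data standing in for the survey responses (n=317 per abstract).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 317
usefulness = rng.normal(size=n)
# Generate correlated ratings by mixing in shared variance (weights arbitrary).
diagnosticity = 0.6 * usefulness + 0.8 * rng.normal(size=n)
use_intention = 0.5 * usefulness + 0.9 * rng.normal(size=n)

for name, x in [("result_diagnosticity", diagnosticity),
                ("use_intention", use_intention)]:
    r, p = pearsonr(usefulness, x)
    print(f"usefulness ~ {name}: r={r:.2f}, p={p:.3g}")
```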


Subject(s)
COVID-19 , Contact Tracing , Mobile Applications , Humans , Contact Tracing/methods , Mobile Applications/statistics & numerical data , Cross-Sectional Studies , COVID-19/epidemiology , Female , Male , Adult , Surveys and Questionnaires , Middle Aged
9.
JMIR Mhealth Uhealth ; 10(1): e27095, 2022 01 18.
Article in English | MEDLINE | ID: mdl-35040801

ABSTRACT

BACKGROUND: Mobile health (mHealth) apps are a promising technology for monitoring and managing health individually and cost-effectively, using devices that are affordable, widely used, and ubiquitous in many people's lives. Download statistics show that lifestyle apps are widely used by young and healthy users to improve fitness, nutrition, and more. While this is important for the prevention of future chronic diseases, burdened health care systems worldwide may profit directly from the use of therapy apps by patients already in need of medical treatment and monitoring. OBJECTIVE: We aimed to compare the factors influencing the acceptance of lifestyle and therapy apps to better understand what drives and hinders the use of mHealth apps. METHODS: We applied the established unified theory of acceptance and use of technology 2 (UTAUT2) model to evaluate mHealth apps via an online questionnaire with 707 German participants. Trust and privacy concerns were added to the model and, in a between-subject study design, the influence of these predictors on behavioral intention to use apps was compared between lifestyle and therapy apps. RESULTS: The results show that the model only weakly predicted the intention to use mHealth apps (R2=0.019). Only hedonic motivation was a significant predictor of behavioral intention for both app types, as determined by the model's path coefficients (lifestyle: 0.196, P=.004; therapy: 0.344, P<.001). Habit influenced the behavioral intention to use lifestyle apps (0.272, P<.001), while social influence (0.185, P<.001) and trust (0.273, P<.001) predicted the intention to use therapy apps. A further exploratory analysis examined correlations between user factors and behavioral intention. Health app familiarity showed the strongest correlation with the intention to use (r=0.469, P<.001), stressing the importance of experience. Age (r=-0.15, P=.004), gender (r=-0.075, P=.048), education level (r=0.088, P=.02), app familiarity (r=0.142, P=.007), digital health literacy (r=0.215, P<.001), privacy disposition (r=-0.194, P<.001), and the propensity to trust apps (r=0.191, P<.001) correlated weakly with behavioral intention to use mHealth apps. CONCLUSIONS: The results indicate that, rather than by utilitarian factors like usefulness, mHealth app acceptance is influenced by emotional factors like hedonic motivation and partly by habit, social influence, and trust. Overall, the findings give evidence that, for the health care context, new and extended acceptance models need to be developed that integrate user diversity, especially individuals' prior experience with apps and mHealth.


Subject(s)
Mobile Applications , Telemedicine , Humans , Life Style , Motivation , Surveys and Questionnaires
10.
Lancet Reg Health Eur ; 13: 100294, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35005678

ABSTRACT

In the summer of 2021, European governments removed most non-pharmaceutical interventions (NPIs) after experiencing prolonged second and third waves of the COVID-19 pandemic. Most countries failed to achieve immunization rates high enough to avoid resurgence of the virus. Public health strategies for autumn and winter 2021 have ranged from countries aiming at low incidence by re-introducing NPIs to countries accepting high incidence levels. However, high-incidence strategies almost certainly lead to the very consequences they seek to avoid: restrictions that harm people and economies. At high incidence, the important pandemic containment measure 'test-trace-isolate-support' becomes inefficient. At that point, the spread of SARS-CoV-2 and its numerous harmful consequences can likely be controlled only through restrictions. We argue that all European countries need to pursue a low-incidence strategy in a coordinated manner. Such an endeavour can only succeed if it is built on open communication and trust.

11.
Lancet Reg Health Eur ; 8: 100185, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34345876

ABSTRACT

How will the coronavirus disease 2019 (COVID-19) pandemic develop in the coming months and years? Based on an expert survey, we examine key aspects that are likely to influence the COVID-19 pandemic in Europe. The challenges and developments will strongly depend on the progress of national and global vaccination programs, the emergence and spread of variants of concern (VOCs), and public responses to non-pharmaceutical interventions (NPIs). In the short term, many people remain unvaccinated, VOCs continue to emerge and spread, and mobility and population mixing are expected to increase. Therefore, lifting restrictions too much and too early risks another damaging wave. This challenge remains despite the reduced opportunities for transmission afforded by vaccination progress and reduced indoor mixing in summer 2021. In autumn 2021, increased indoor activity might accelerate the spread again, whilst a necessary reintroduction of NPIs might come too slowly. Incidence may rise strongly again, possibly filling intensive care units, if vaccination levels are not high enough. A moderate, adaptive level of NPIs will thus remain necessary. These epidemiological aspects, combined with economic, social, and health-related consequences, provide a more holistic perspective on the future of the COVID-19 pandemic.

12.
Front Artif Intell ; 3: 45, 2020.
Article in English | MEDLINE | ID: mdl-33733162

ABSTRACT

Today, most people use online social networks not only to stay in contact with friends, but also to find information about relevant topics or to spread information. While much research has been conducted on opinion formation, little is known about which factors influence whether a user of an online social network disseminates information or not. To answer this question, we created an agent-based model and simulated message spreading in social networks using a latent-process model. In our model, we varied four content types and six network types, and we compared a variant that equips agents with a personality model against one that does not. We found that the network type has only a weak influence on the distribution of content, whereas the message type has a clear influence on how many users receive a message. Using a personality model helped achieve more realistic outcomes.
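A toy sketch in the spirit of the simulation described above: an independent-cascade style diffusion over two network types, with per-content sharing probabilities. The latent-process and personality components of the actual model are not reproduced, and all probabilities are hypothetical.

```python
# Toy agent-based message spreading: each informed agent gets one chance to
# pass the message to each neighbor. Sharing probabilities are hypothetical.
import random
import networkx as nx

def simulate_spread(g: nx.Graph, seed_node, share_prob: float, rng) -> int:
    """Independent-cascade diffusion; returns the number of agents reached."""
    informed, frontier = {seed_node}, [seed_node]
    while frontier:
        nxt = []
        for u in frontier:
            for v in g.neighbors(u):
                if v not in informed and rng.random() < share_prob:
                    informed.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(informed)

rng = random.Random(42)
networks = {"small-world": nx.watts_strogatz_graph(1000, 6, 0.1, seed=1),
            "scale-free": nx.barabasi_albert_graph(1000, 3, seed=1)}
content_types = {"news": 0.10, "rumor": 0.15, "personal": 0.05}  # hypothetical
for net_name, g in networks.items():
    for content, p in content_types.items():
        reach = sum(simulate_spread(g, 0, p, rng) for _ in range(20)) / 20
        print(f"{net_name:>11} | {content:>8}: mean reach {reach:.0f}")
```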

13.
Nonlinear Dynamics Psychol Life Sci ; 13(4): 369-92, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19781136

ABSTRACT

The present study tested for 1/f noise to examine how timing and target constraints affect cognitive processes in aiming. Participants pointed to targets of varied height and width at preferred speed (Experiment 1) and as quickly as possible (Experiment 2). Results show greater intensity of 1/f noise, or long-range correlation in variability, at preferred speed and with increased accuracy demands perpendicular to the target (i.e., related to height). Prior research suggests that increased 1/f noise in movement reflects increased coordination of processes at different timescales (e.g., planning and control), particularly when there is more time to complete the movement. Previous studies also suggest that target height constraints promote more reliance on both predictive and reactive control, as more time is spent during initial aiming and adjustment. Thus, present results expand on what we know about aiming movements in two ways: (a) by further suggesting a non-orthogonal relation between planning (coarse aiming) and control (fine tuning) that is time dependent; and (b) by demonstrating that such an integration of processes, reflected in distinct patterns of 1/f noise, may be modulated by multiple environmental characteristics (i.e., target shape).


Subject(s)
Attention , Fractals , Kinesthesis , Nonlinear Dynamics , Orientation , Psychomotor Performance , Reaction Time , Acceleration , Adult , Algorithms , Biomechanical Phenomena , Female , Humans , Male , Signal Processing, Computer-Assisted
14.
Exp Brain Res ; 187(2): 303-19, 2008 May.
Article in English | MEDLINE | ID: mdl-18283444

ABSTRACT

The present study used 1/f noise to examine how spatial, physical, and timing constraints affect planning and control processes in aiming. Participants moved objects of different masses to different distances at preferred speed (Experiment 1) and as quickly as possible (Experiment 2). Power spectral density, standardized dispersion, rescaled range, and an autoregressive fractionally integrated moving average (ARFIMA) model selection procedure were used to quantify 1/f noise. Measures from all four analyses were in reasonable agreement, with more ARFIMA (long-range) models selected at peak velocity in Experiment 1 and fewer selected at peak velocity in Experiment 2. There was also a nonsignificant trend at preferred speed: among participants who showed 1/f noise, more tended to show it at peak velocity, when planning and control would overlap most. This trend disappeared for fast movements, where planning and control would have less time to overlap. Summing short-range processes at different timescales can produce 1/f-like noise. As planning is a slower process and control a faster one, the present results suggest that, with enough time for both planning and control, 1/f noise in aiming may arise from a similar summation of processes. Potential limitations of time series length in the present task are discussed.
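The sketch below shows the spectral route to quantifying 1/f noise used here: estimate the power spectral density of a time series and fit a line in log-log coordinates, where a slope near -1 indicates 1/f (pink) noise. The study's other estimators (standardized dispersion, rescaled range, ARFIMA selection) are not reproduced.

```python
# Sketch: estimate the spectral exponent of a time series; slope ~0 for white
# noise, ~-1 for 1/f (pink) noise, ~-2 for Brownian motion.
import numpy as np
from scipy.signal import welch

def spectral_exponent(x: np.ndarray, fs: float = 1.0) -> float:
    """Return the log-log slope of the power spectral density."""
    f, pxx = welch(x - np.mean(x), fs=fs, nperseg=min(256, len(x)))
    mask = f > 0  # drop the DC bin before taking logs
    slope, _ = np.polyfit(np.log10(f[mask]), np.log10(pxx[mask]), 1)
    return slope

rng = np.random.default_rng(0)
white = rng.standard_normal(1024)
brown = np.cumsum(white)          # Brownian (1/f^2) series, for contrast
print(spectral_exponent(white))   # roughly 0
print(spectral_exponent(brown))   # roughly -2
```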


Subject(s)
Movement/physiology , Psychomotor Performance/physiology , Adult , Analysis of Variance , Biomechanical Phenomena/physiology , Female , Humans , Male , Models, Neurological
15.
IEEE Trans Vis Comput Graph ; 24(1): 584-594, 2018 01.
Article in English | MEDLINE | ID: mdl-28866525

ABSTRACT

We investigate priming and anchoring effects on perceptual tasks in visualization. Priming and anchoring effects denote the phenomenon whereby a stimulus influences subsequent human judgments, either on a perceptual level or on a cognitive level by providing a frame of reference. Using visual class separability in scatterplots as an example task, we performed a set of five studies to investigate the potential existence of priming and anchoring effects. Our findings show that, under certain circumstances, such effects indeed exist. In other words, humans judge the class separability of the same scatterplot differently depending on the scatterplot(s) they have seen before. These findings inform future work on better understanding and more accurately modeling human perception of visual patterns.


Subject(s)
Bias , Psychological Tests , Repetition Priming/physiology , Visual Perception/physiology , Crowdsourcing , Databases, Factual , Humans , Models, Psychological , Research Design
16.
Wounds ; 30(8): 229-234, 2018 Aug.
Article in English | MEDLINE | ID: mdl-30212372

ABSTRACT

BACKGROUND: Compliance with turning protocols in the intensive care unit (ICU) is low; moreover, little is known about the quality of turning, such as turn angle magnitude or depressurization time. Wearable sensors are now available that provide insight into these care practices. OBJECTIVE: This secondary descriptive study describes nurses' turning practices for consecutive ICU patients in 2 ICUs at an academic medical center. MATERIALS AND METHODS: A wearable patient sensor was applied to patients on hospital admission. The sensor continuously recorded position data but was not visible to staff. A qualified turn was one that reached a > 20° angle and was held for at least 1 minute after turning. The institution's clinical research repository provided clinical data. RESULTS: A total of 555 patients were analyzed over a 5-month period (September 2015-January 2016); 44,870 hours of monitoring data (mean, 73 ± 97 hours/patient) and 27,566 individual turns were recorded. Compliant time was 54%, with 39% of observed turns reaching the minimum angle threshold and 38% of patients remaining in place for > 15 minutes (depressurization). Turn magnitude was similar for medical and surgical patients. Factors associated with lower compliant time included male sex, high body mass index, and low Braden score. Patients were supine for 72% of the observed time. CONCLUSIONS: The investigators found dynamically measured turning frequency, turn magnitude, and tissue depressurization time to be suboptimal. This study highlights the need to reinforce best practices related to preventive turning and to consider staff and patient factors when developing individualized turn protocols.
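A sketch of the turn-qualification rule stated above (> 20° orientation change held for at least 1 minute). The (timestamp, angle) stream format and the example data are hypothetical, not the sensor's actual output.

```python
# Sketch: count qualified turns in a stream of (timestamp_seconds,
# angle_degrees) samples; stream format and example data are hypothetical.

def count_qualified_turns(samples, min_angle=20.0, hold_seconds=60.0):
    qualified, i = 0, 0
    while i < len(samples):
        t0, a0 = samples[i]
        # Scan forward to the next appreciable orientation change.
        j = i + 1
        while j < len(samples) and abs(samples[j][1] - a0) < min_angle:
            j += 1
        if j == len(samples):
            break
        # Candidate turn at samples[j]; measure how long the new angle is held.
        t_turn, a_turn = samples[j]
        k = j + 1
        while k < len(samples) and abs(samples[k][1] - a_turn) < min_angle:
            k += 1
        if samples[k - 1][0] - t_turn >= hold_seconds:
            qualified += 1
        i = j
    return qualified

# Hypothetical stream: supine (0 deg), turned to 30 deg and held for 2 minutes,
# then returned to supine just before the recording ends.
stream = [(0, 0), (60, 0), (90, 30), (150, 30), (210, 30), (240, 0)]
print(count_qualified_turns(stream))  # -> 1
```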


Subject(s)
Guideline Adherence/statistics & numerical data , Iatrogenic Disease/prevention & control , Intensive Care Units , Patient Positioning/standards , Practice Patterns, Nurses'/statistics & numerical data , Pressure Ulcer/prevention & control , Wearable Electronic Devices/statistics & numerical data , Adolescent , Adult , Aged , Body Mass Index , Female , Humans , Intention to Treat Analysis , Male , Middle Aged , Outcome Assessment, Health Care , Patient Positioning/instrumentation , Quality Improvement , Sex Distribution , Time Factors , Young Adult
17.
IEEE Trans Vis Comput Graph ; 24(4): 1623-1632, 2018 04.
Article in English | MEDLINE | ID: mdl-29543179

ABSTRACT

In virtual environments, the space that can be explored by real walking is limited by the size of the tracked area. To enable unimpeded walking through large virtual spaces within small real-world surroundings, redirection techniques are used; these unnoticeably manipulate the user's virtual walking trajectory. It is important to know how strongly such techniques can be applied without the user noticing the manipulation or becoming cybersick. Previously, this was estimated by measuring a detection threshold (DT) in highly controlled psychophysical studies, which experimentally isolate the effect but do not aim for perceived immersion in the context of VR applications. While these studies suggest that only relatively low degrees of manipulation are tolerable, we claim that, besides establishing detection thresholds, it is important to know when the user's immersion breaks. We hypothesize that the degree of unnoticed manipulation differs significantly from the detection threshold when the user is immersed in a task. We conducted three studies: (a) to devise an experimental paradigm to measure the threshold of limited immersion (TLI), (b) to measure the TLI for slowly decreasing and increasing rotation gains, and (c) to establish a baseline of cybersickness for our experimental setup. For rotation gains greater than 1.0, we found that immersion breaks quite late after the gain becomes detectable. However, for gains less than 1.0, some users reported a break of immersion even before established detection thresholds were reached. Apparently, the developed metric measures an additional quality of user experience. This article contributes to the development of effective spatial compression methods by using the break of immersion as a benchmark for redirection techniques.
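For readers unfamiliar with rotation gains, the sketch below shows the basic mechanism under discussion: virtual rotation is real head rotation scaled by a gain, ramped slowly so the manipulation stays subtle. The ramp rate and limit are hypothetical values, not the thresholds measured in the studies.

```python
# Sketch of a rotation gain for redirected walking; ramp rate and gain limit
# below are hypothetical, not the measured DT or TLI values.

def virtual_yaw_delta(real_yaw_delta_deg: float, gain: float) -> float:
    """gain > 1.0: the scene rotates faster than the head; < 1.0: slower."""
    return real_yaw_delta_deg * gain

def ramp_gain(t_seconds: float, start=1.0, rate_per_s=0.001, limit=1.3):
    """Slowly increase the gain over time so the manipulation stays subtle."""
    return min(limit, start + rate_per_s * t_seconds)

# A user turning the head 90 degrees at t=100 s sees extra scene rotation:
g = ramp_gain(100.0)
print(g, virtual_yaw_delta(90.0, g))  # 1.1 -> 99 degrees of virtual rotation
```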


Subject(s)
Motion Sickness/prevention & control , Virtual Reality , Walking/physiology , Adolescent , Adult , Computer Graphics , Female , Humans , Male , Rotation , Young Adult
18.
Int J Nurs Stud ; 80: 12-19, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29331656

ABSTRACT

IMPORTANCE: Though routine repositioning of at-risk patients is theoretically sound, studies have failed to demonstrate its benefit for the prevention of hospital-acquired pressure injuries. OBJECTIVE: To assess the clinical effectiveness of a wearable patient sensor in improving care delivery and patient outcomes by increasing the total time with turning compliance and preventing pressure injuries in acutely ill patients. DESIGN: Pragmatic, investigator-initiated, open-label, single-site, randomized clinical trial. SETTING: Two intensive care units in a large academic medical center in California. PARTICIPANTS: Consecutive adult patients admitted to one of two intensive care units between September 2015 and January 2016 were included (n = 1564). Of the eligible patients, 1312 underwent randomization. INTERVENTION: Patients received either turning care relying on traditional turn reminders and standard practices (control group, n = 653) or optimal turning practices informed by real-time data from a wearable patient sensor (treatment group, n = 659). MAIN OUTCOME(S) AND MEASURE(S): The primary and secondary outcomes of interest were the occurrence of hospital-acquired pressure injuries and turning compliance. Sensitivity analysis was performed to compare intention-to-treat and per-protocol effects. RESULTS: The mean age was 60 years (SD, 17 years); 55% were male. We analyzed 103,000 hours of monitoring data. Overall, the intervention group had significantly fewer hospital-acquired pressure injuries during intensive care unit admission than the control group (5 patients [0.7%] vs 15 patients [2.3%]; OR = 0.33, 95% CI [0.12, 0.90], p = 0.031). The total time with turning compliance differed significantly between the intervention and control groups (67% vs 54%; difference 0.11, 95% CI [0.08, 0.13], p < 0.001). Turn magnitude (21°, p = 0.923) and adequate depressurization time (39%, p = 0.145) were not statistically different between groups. CONCLUSIONS AND RELEVANCE: Among acutely ill adult patients requiring intensive care unit admission, the provision of optimal turning was greater with a wearable patient sensor, increasing the total time with turning compliance and demonstrating a statistically significant protective effect against the development of hospital-acquired pressure injuries. These are the first quantitative data on turn quality in the intensive care unit and highlight the need to reinforce optimal turning practices. Additional clinical trials leveraging technologies such as wearable sensors are needed to establish the appropriate frequency and dosing of individualized turning protocols to prevent pressure injuries in at-risk hospitalized patients.
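As a check on the primary outcome, the sketch below recomputes the odds ratio and a Wald 95% CI from the counts reported above (5 of 659 intervention vs 15 of 653 control patients).

```python
# Sketch: odds ratio with Wald 95% CI from the reported 2x2 counts.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b = events/non-events (treatment); c,d = events/non-events (control)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

a, b = 5, 659 - 5    # pressure injuries / no injuries, intervention arm
c, d = 15, 653 - 15  # control arm
print(odds_ratio_ci(a, b, c, d))  # ~ (0.33, 0.12, 0.90), matching the abstract
```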


Subject(s)
Intensive Care Units/organization & administration , Patient Positioning/standards , Wearable Electronic Devices , Academic Medical Centers/organization & administration , Acute Disease , Adult , Aged , California , Female , Guideline Adherence , Humans , Intention to Treat Analysis , Male , Middle Aged , Outcome Assessment, Health Care , Pressure Ulcer/prevention & control , Risk Factors
19.
Soc Work ; 60(3): 238-46, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26173365

ABSTRACT

Social workers have played an integral role in society's response to the HIV/AIDS pandemic since the discovery of the disease. As the landscape of the epidemic has changed, so has the social work response to it. Social workers are, and have been, central to the success of TESTAZ (Test, Educate, Support, and Treat Arizona), which is a nontargeted, routine opt-out HIV screening program in the emergency department (ED) of Maricopa Medical Center. This article focuses on the crucial role social workers play in every stage of program development, implementation, and patient movement through the stages of the HIV care continuum. Social worker involvement with HIV-positive patients diagnosed in the ED is imperative to achieving patient viral suppression.


Subject(s)
Continuity of Patient Care , Emergency Service, Hospital , HIV Infections/therapy , Professional Role , Social Work , Adolescent , Adult , Aged , Arizona , Female , HIV Infections/diagnosis , Humans , Male , Middle Aged , Pregnancy , Young Adult
20.
J Neural Eng ; 10(1): 016001, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23220865

ABSTRACT

OBJECTIVE: Clinicians often use depth-electrode recordings to localize human epileptogenic foci. To advance the diagnostic value of these recordings, we applied logistic regression models to single-neuron recordings from depth-electrode microwires to predict seizure onset zones (SOZs). APPROACH: We collected data from 17 epilepsy patients at the Barrow Neurological Institute and developed logistic regression models to calculate the odds of observing SOZs in the hippocampus, amygdala, and ventromedial prefrontal cortex, based on statistics such as the burst interspike interval (ISI). MAIN RESULTS: Analysis of these models showed that, for a one-unit increase in burst ISI ratio, the left hippocampus was approximately 12 times more likely to contain an SOZ, and the right amygdala 14.5 times more likely. Our models were most accurate for the hippocampus bilaterally (85% average sensitivity), and performance was comparable to that of current diagnostics such as electroencephalography. SIGNIFICANCE: Logistic regression models can be combined with single-neuron recordings to predict likely SOZs in epilepsy patients being evaluated for resective surgery, providing an automated source of clinically useful information.
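A sketch of the modeling approach: logistic regression of SOZ membership on a burst-firing statistic, where exp(coefficient) is the multiplicative change in odds per one-unit increase in the predictor (the abstract reports roughly 12x for burst ISI ratio in the left hippocampus). The synthetic data below are illustrative only, not the study's recordings.

```python
# Sketch: logistic regression relating a burst-firing statistic to SOZ
# membership; exp(coef) is the odds multiplier per one-unit increase.
# Synthetic data stand in for the single-unit recordings.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 200                                    # hypothetical number of units
burst_isi_ratio = rng.gamma(2.0, 0.5, n)   # hypothetical per-unit statistic
# Simulate SOZ labels with a true log-odds slope of 2.5 (odds ratio ~12).
logit = -2.0 + 2.5 * burst_isi_ratio
in_soz = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(burst_isi_ratio)
fit = sm.Logit(in_soz.astype(float), X).fit(disp=0)
print(np.exp(fit.params[1]))  # estimated odds multiplier, ~12 in expectation
```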


Subject(s)
Action Potentials/physiology , Electrodes, Implanted , Electroencephalography/methods , Epilepsy/diagnosis , Models, Neurological , Neurons/physiology , Adult , Electrodes, Implanted/statistics & numerical data , Electroencephalography/instrumentation , Electroencephalography/statistics & numerical data , Epilepsy/physiopathology , Female , Humans , Male , Middle Aged , Neurons/pathology , Predictive Value of Tests , Young Adult