2.
Pancreatology ; 16(4): 621-4, 2016.
Article in English | MEDLINE | ID: mdl-26968257

ABSTRACT

BACKGROUND/OBJECTIVES: Angiogenesis plays a central role in tumor growth and metastasis, and tyrosine kinases are crucial in the modulation of growth factor signaling. Several side effects of tyrosine kinase inhibitors have been reported, including diarrhea due to pancreatic insufficiency. The suspected mechanism is the anti-angiogenic effect of vascular endothelial growth factor (VEGF) inhibition, which disturbs the pancreatic microvasculature. The aim of the present study was to determine the volume of the pancreas before and after therapy with either the multi-tyrosine kinase inhibitor Sorafenib or Bevacizumab, a humanized monoclonal immunoglobulin G1 antibody against VEGF. METHODS: Retrospective monocentric study including 42 patients who received Sorafenib, Bevacizumab combined with Fluorouracil and/or Irinotecan, or Fluorouracil and Irinotecan alone for different non-pancreatic malignancies. The volume of the pancreas was measured before and after therapy by CT-based volumetry. RESULTS: The pancreatic volume was significantly lower after treatment with Sorafenib (75.4 mL vs. 71.0 mL; p = 0.006) or with Bevacizumab and Fluorouracil ± Irinotecan (71.8 mL vs. 62.6 mL; p = 0.020). The pancreatic volume did not change significantly after treatment with Fluorouracil ± Irinotecan alone (51.1 mL vs. 49.9 mL; p = 0.142). CONCLUSIONS: Pancreatic volume decreases significantly under treatment with both the multi-tyrosine kinase inhibitor Sorafenib and the angiogenesis inhibitor Bevacizumab. This volume reduction is most likely due to a reduced microvasculature caused by VEGF inhibition.
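The key analysis is a paired before/after comparison of per-patient volumes. The following is a minimal sketch of such a comparison in Python, not the authors' actual analysis; the volume arrays are hypothetical placeholders, since the per-patient data are not given in the abstract.

# Paired comparison of pancreatic volume before vs. after therapy.
import numpy as np
from scipy import stats

# Hypothetical per-patient volumes in mL (placeholders, not study data).
volume_before_ml = np.array([82.1, 70.3, 91.5, 68.0, 77.2])
volume_after_ml = np.array([76.4, 66.1, 85.0, 65.2, 72.9])

# Paired t-test, with the Wilcoxon signed-rank test as a non-parametric alternative.
t_stat, p_ttest = stats.ttest_rel(volume_before_ml, volume_after_ml)
w_stat, p_wilcoxon = stats.wilcoxon(volume_before_ml, volume_after_ml)

print(f"mean volume before {volume_before_ml.mean():.1f} mL, after {volume_after_ml.mean():.1f} mL")
print(f"paired t-test p = {p_ttest:.3f}, Wilcoxon p = {p_wilcoxon:.3f}")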


Subject(s)
Angiogenesis Inhibitors/adverse effects , Bevacizumab/adverse effects , Niacinamide/analogs & derivatives , Pancreas/diagnostic imaging , Phenylurea Compounds/adverse effects , Adult , Aged , Angiogenesis Inhibitors/therapeutic use , Antimetabolites, Antineoplastic/therapeutic use , Antineoplastic Agents, Phytogenic/therapeutic use , Bevacizumab/therapeutic use , Camptothecin/analogs & derivatives , Camptothecin/therapeutic use , Female , Fluorouracil/therapeutic use , Humans , Irinotecan , Male , Middle Aged , Neoplasms/drug therapy , Niacinamide/adverse effects , Niacinamide/therapeutic use , Phenylurea Compounds/therapeutic use , Retrospective Studies , Sorafenib , Tomography, X-Ray Computed , Vascular Endothelial Growth Factor A/antagonists & inhibitors
3.
Eur Arch Otorhinolaryngol ; 272(10): 2737-40, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25193549

ABSTRACT

Ocular vestibular evoked myogenic potentials (oVEMPs) represent extraocular muscle activity in response to vestibular stimulation. oVEMP amplitudes are known to increase with increasing upward gaze angle while the subject fixates a visual target. We investigated two different methods of presenting a visual target during oVEMP recordings. Fifty-seven healthy subjects were enrolled in this study. oVEMPs were elicited by 500 Hz air-conducted tone bursts while the subjects looked upward at a marking that was either fixed on the wall or projected by a head-mounted laser attached to a headband, in either case corresponding to a 35° upward gaze angle. oVEMP amplitudes and latencies did not differ between the subjects looking at the fixed marking and those looking at the laser marking. The intra-individual standard deviation of amplitudes obtained from two separate measurements per subject, taken as a measure of test-retest reliability, was however significantly smaller in the laser headband group (0.60) than in the group looking at the fixed marking (0.96; p = 0.007). The intraclass correlation coefficient likewise indicated better test-retest reliability for oVEMP amplitudes when using the laser headband (0.957) than when using the fixed marking (0.908). Hence, the use of a visual target projected from a headband enhances the reproducibility of oVEMPs. This might be because the laser headband ensures a constant gaze angle and rules out the influence of small involuntary head movements on the gaze angle.
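The two reliability measures reported here, the intra-individual standard deviation of repeated measurements and the intraclass correlation coefficient, can be computed as sketched below. This is an illustrative calculation with made-up amplitude values, not the study's data, and it assumes the one-way ICC(1,1) form, since the abstract does not state which ICC variant was used.

# Test-retest reliability of oVEMP amplitudes from two measurements per subject.
import numpy as np

# rows = subjects, columns = measurement 1 and 2 (hypothetical amplitudes, µV)
amps = np.array([
    [6.1, 6.4],
    [3.2, 3.0],
    [8.5, 7.9],
    [5.0, 5.6],
    [4.4, 4.1],
])
n, k = amps.shape

# Mean intra-individual standard deviation across subjects.
intra_sd = amps.std(axis=1, ddof=1).mean()

# One-way random-effects ICC(1,1) from ANOVA mean squares.
grand_mean = amps.mean()
ms_between = k * np.sum((amps.mean(axis=1) - grand_mean) ** 2) / (n - 1)
ms_within = np.sum((amps - amps.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

print(f"mean intra-individual SD = {intra_sd:.2f}, ICC(1,1) = {icc:.3f}")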


Subject(s)
Electromyography/methods , Head Movements/physiology , Lasers , Oculomotor Muscles/physiology , Vestibular Evoked Myogenic Potentials/physiology , Adult , Female , Head , Humans , Male , Reproducibility of Results
4.
Front Hum Neurosci ; 9: 680, 2015.
Article in English | MEDLINE | ID: mdl-26733851

ABSTRACT

BACKGROUND: People with color vision deficiencies report numerous limitations in daily life, restricting, for example, their access to some professions. However, they use basic color terms systematically and in a similar manner as people with normal color vision. We hypothesized that the gaze behavior of people with color vision deficiency might explain this discrepancy between color perception and its behavioral consequences. METHODS: A group of participants with color vision deficiencies and a control group performed several search tasks in a naturalistic setting on a lawn. All participants wore a mobile eye-tracking-driven camera with high foveal image resolution (EyeSeeCam). Search performance as well as fixations on objects of different colors were examined. RESULTS: Search performance was similar in both groups in a color-unrelated search task as well as in a search for yellow targets. While searching for red targets, participants with color vision deficiencies showed strongly degraded performance, which was closely matched by the number of fixations on red objects in the two groups. Importantly, once they fixated a target, participants with color vision deficiencies made only a few identification errors. CONCLUSIONS: In contrast to controls, participants with color vision deficiencies are not able to enhance their search for red targets on a (green) lawn by an efficient guiding mechanism. The data indicate that the impaired guiding is the main influence on search performance, while foveal identification (verification) is largely unaffected by the color vision deficiency.

5.
Ann N Y Acad Sci ; 1164: 116-8, 2009 May.
Article in English | MEDLINE | ID: mdl-19645888

ABSTRACT

Human head rotation in roll around an earth-horizontal axis constitutes a vestibular stimulus that acts on the semicircular canals (SCC) through its rotational component and on the otoliths through its tilt of the gravity vector. Galvanic vestibular stimulation (GVS) is thought to resemble mainly a rotation in roll. A superposition of sinusoidal GVS with a natural earth-horizontal roll movement was therefore applied in order to cancel the rotation effects and isolate the otolith activation. By self-adjusting the amplitude and phase of the GVS, subjects were able to minimize their sensation of rotation and to generate the perception of a linear translation. The final adjustments lie within the range predicted by a model of SCC activation during natural rotations and GVS. This indicates that the tilt-translation ambiguity of the otoliths is resolved by SCC-otolith interaction. It is concluded that GVS might be able to cancel rotations in roll and that the residual tilt of the gravitoinertial force is possibly interpreted as a linear translation.
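The cancellation idea can be illustrated with a toy superposition of two sinusoids: the roll velocity sensed by the SCC and a GVS-evoked "virtual roll" whose amplitude and phase the subject adjusts. The frequency and amplitude values below are illustrative assumptions, not the study's stimulus parameters.

# Superposition of a sinusoidal roll-velocity signal and a GVS-evoked component.
import numpy as np

f = 0.5                       # stimulus frequency in Hz (assumed)
t = np.linspace(0, 4, 1000)   # time in seconds
roll_velocity = 10.0 * np.sin(2 * np.pi * f * t)   # deg/s sensed by the SCC (assumed amplitude)

def net_rotation_signal(gvs_amplitude, gvs_phase):
    """Residual rotation signal after adding the GVS-evoked component."""
    gvs_component = gvs_amplitude * np.sin(2 * np.pi * f * t + gvs_phase)
    return roll_velocity + gvs_component

# Cancellation requires matched amplitude and opposite (180 deg) phase.
residual = net_rotation_signal(gvs_amplitude=10.0, gvs_phase=np.pi)
print(f"peak residual rotation signal: {np.abs(residual).max():.3e} deg/s")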


Subject(s)
Earth, Planet , Head Movements , Vestibule, Labyrinth/physiology , Humans
6.
Ann N Y Acad Sci ; 1164: 188-93, 2009 May.
Article in English | MEDLINE | ID: mdl-19645898

ABSTRACT

Humans adjust gaze by eye, head, and body movements. Certain stimulus properties are therefore overrepresented at the center of gaze, but the relative contribution of eye-in-head and head-in-world movements to this selection process is unknown. Gaze- and head-centered videos recorded with a wearable device (EyeSeeCam) during free exploration were reanalyzed with respect to the responses of a face-detection algorithm. In line with results on low-level features, face detections were found to be concentrated near the center of gaze. By comparing environments with few and many true faces, it was inferred that actual faces are centered by eye and head movements, whereas spurious face detections ("hallucinated faces") are centered primarily by head movements alone. This analysis suggests distinct contributions to gaze allocation: head-in-world movements induce a coarse bias in the distribution of features, which eye-in-head movements refine.
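One way to quantify "centered" is to compare how far face detections fall from the frame center in gaze-centered versus head-centered videos. The sketch below does exactly that for a few hypothetical detection coordinates; it is not the paper's analysis pipeline.

# Mean distance of face-detection centers from the frame center, per reference frame.
import numpy as np

def mean_center_distance(detections_xy, frame_size=(1280, 720)):
    """Mean Euclidean distance of detection centers from the frame center, in pixels."""
    center = np.array(frame_size) / 2
    return np.linalg.norm(np.asarray(detections_xy, dtype=float) - center, axis=1).mean()

gaze_detections = [(640, 355), (660, 370), (610, 352)]   # hypothetical, near the gaze center
head_detections = [(400, 300), (900, 500), (640, 360)]   # hypothetical, more spread out

print(f"gaze-centered: {mean_center_distance(gaze_detections):.1f} px, "
      f"head-centered: {mean_center_distance(head_detections):.1f} px")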


Subject(s)
Eye Movements , Head Movements , Adult , Female , Humans , Male
7.
Ann N Y Acad Sci ; 1164: 331-3, 2009 May.
Article in English | MEDLINE | ID: mdl-19645921

ABSTRACT

Head impulses are a routine clinical test of semicircular canal function. At the bedside, they are used to detect malfunctioning of the horizontal semicircular canals. So far, 3-D search-coil recording has been required to reliably test anterior and posterior canal function and to determine the gain of the vestibulo-ocular reflex (VOR), but search-coil recording cannot be done at the bedside. Here we tested whether video-oculography (VOG) is suitable for assessing VOR gain for individual canals at the bedside. We recorded head impulses in healthy subjects using a mobile, high-frame-rate, head-mounted VOG device and compared the results with those obtained with standard search-coil recording. Our preliminary results indicate that high-frame-rate VOG is a promising tool for measuring and quantifying individual semicircular canal function, including at the bedside.
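The quantity at stake, VOR gain, is essentially the (sign-inverted) ratio of eye velocity to head velocity during the impulse. The sketch below estimates it by regressing eye velocity on head velocity over the high-velocity part of a synthetic impulse; the traces, sampling rate, and threshold are assumptions, not the study's recordings or method.

# VOR gain from a single head impulse, estimated by linear regression.
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                    # sampling rate in Hz (assumed)
t = np.arange(0, 0.3, 1 / fs)
head_velocity = 150 * np.exp(-((t - 0.1) / 0.03) ** 2)            # deg/s, bell-shaped impulse
eye_velocity = -0.95 * head_velocity + rng.normal(0, 2, t.size)   # deg/s, compensatory response

# Use only the high-velocity part of the impulse for the fit.
mask = np.abs(head_velocity) > 30
gain = -np.polyfit(head_velocity[mask], eye_velocity[mask], 1)[0]
print(f"VOR gain ~ {gain:.2f}")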


Subject(s)
Reflex, Vestibulo-Ocular , Head Movements , Humans , Semicircular Canals/physiology
8.
Ann N Y Acad Sci ; 1164: 353-66, 2009 May.
Article in English | MEDLINE | ID: mdl-19645927

ABSTRACT

Eye, head, and body movements jointly control the direction of gaze and the stability of retinal images in most mammalian species. The contribution of the individual movement components, however, depends largely on the ecological niche the animal occupies and on the layout of the animal's retina, in particular its photoreceptor density distribution. Here the relative contributions of eye-in-head and head-in-world movements in cats are measured, and the results are compared to recent human data. For the cat, a lightweight custom-made head-mounted video setup was used (CatCam). Human data were acquired with the novel EyeSeeCam device, which measures eye position to control a gaze-contingent camera in real time. For both species, the analysis was based on simultaneous recordings of eye and head movements during free exploration of a natural environment. Despite the substantial differences in ecological niche, photoreceptor density, and saccade frequency, eye-movement characteristics in both species are remarkably similar. Coordinated eye and head movements dominate the dynamics of the retinal input. Interestingly, compensatory (gaze-stabilizing) movements play a more dominant role in humans than they do in cats. This finding was interpreted as a consequence of the substantially different timescales of head movements, with cats' head movements showing about 5-fold faster dynamics than humans'. For both species, models and laboratory experiments therefore need to account for these rich input dynamics to remain valid in ecologically realistic settings.
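Since gaze-in-world is the sum of eye-in-head and head-in-world orientation, a simple proxy for "compensatory" behavior is the fraction of samples in which the two velocity components point in opposite directions. The sketch below computes that fraction on synthetic traces; the velocity distributions and thresholds are assumptions for illustration only.

# Fraction of gaze-stabilizing (compensatory) samples from eye and head velocities.
import numpy as np

rng = np.random.default_rng(0)
head_velocity = rng.normal(0, 20, 10_000)                         # deg/s, head-in-world
eye_velocity = -0.8 * head_velocity + rng.normal(0, 10, 10_000)   # deg/s, eye-in-head

gaze_velocity = eye_velocity + head_velocity                      # deg/s, gaze-in-world

moving = (np.abs(eye_velocity) > 1) & (np.abs(head_velocity) > 1)
compensatory = np.sign(eye_velocity) != np.sign(head_velocity)
print(f"fraction of compensatory samples: {np.mean(compensatory[moving]):.2f}")
print(f"gaze velocity SD {gaze_velocity.std():.1f} vs. head velocity SD {head_velocity.std():.1f} deg/s")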


Subject(s)
Eye Movements , Head Movements , Animals , Cats , Humans
9.
Ann N Y Acad Sci ; 1164: 376-9, 2009 May.
Article in English | MEDLINE | ID: mdl-19645930

ABSTRACT

Humans can discriminate whether a change in the direction of the gravito-inertial force (GIF) is caused by body tilt or by linear translation. This ability, attributed to vestibular sensory fusion, is often examined by asking subjects to adjust an indicator to match their subjective earth-fixed vertical (SV). We used two different indicator modalities, visual and haptic, to examine continuous adjustment during different combinations of roll rotation and linear translation on a hexapod motion device. We conclude that, under combined translational and rotational motion, the modality of indication plays a major role in the perceived verticality of the indicator.


Subject(s)
Movement , Analysis of Variance , Humans , Proprioception , Vision, Ocular
10.
Ann N Y Acad Sci ; 1164: 461-7, 2009 May.
Article in English | MEDLINE | ID: mdl-19645949

ABSTRACT

A prototype gaze-controlled, head-mounted camera (EyeSeeCam) was developed that enables fundamental studies of human gaze behavior even under dynamic conditions such as locomotion. EyeSeeCam combines active visual exploration by saccades with image stabilization during head, object, and surround motion, just as occurs in human ocular motor control. This prototype is a first attempt to combine free user mobility with image stabilization and unrestricted exploration of the visual surround in a man-made technical vision system. The gaze-driven camera is supplemented by an additional wide-angle, head-fixed scene camera. The focused gaze view is embedded in this scene view with picture-in-picture functionality, which provides an approximation of the foveated retinal content. Such a combined video clip can be viewed more comfortably than the saccade-pervaded image of the gaze camera alone. EyeSeeCam consists of a video-oculography (VOG) device and a camera motion device. The benchmark for evaluating such a device is the vestibulo-ocular reflex (VOR), which requires a latency on the order of 10 msec between head and eye (camera) movements for proper image stabilization. A new lightweight VOG was developed that synchronously measures binocular eye positions at up to 600 Hz. The camera motion device consists of a parallel-kinematics setup with a backlash-free gimbal joint driven by piezo actuators without reduction gears. As a result, the latency between the rotations of an artificial eye and the camera was 10 msec, which is VOR-like.
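The 10-msec figure is a latency between two rotation traces (artificial eye in, camera out), which can be estimated by cross-correlation. The sketch below shows that estimation on a synthetic saccade-like trace; the sampling rate, waveform, and delay are assumed values, not measurements from the device.

# Latency between eye (input) and camera (output) rotation, via cross-correlation.
import numpy as np

fs = 1000                                            # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)
eye_angle = 10 * np.exp(-((t - 1.0) / 0.05) ** 2)    # deg, saccade-like excursion
true_delay = 10                                      # samples, i.e. 10 ms at 1 kHz
camera_angle = np.roll(eye_angle, true_delay)        # camera follows with a delay

corr = np.correlate(camera_angle - camera_angle.mean(),
                    eye_angle - eye_angle.mean(), mode="full")
lag = np.argmax(corr) - (len(eye_angle) - 1)
print(f"estimated latency: {1000 * lag / fs:.1f} ms")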


Subject(s)
Eye Movements , Photography/instrumentation , Biomechanical Phenomena , Humans , Oculomotor Muscles/physiology , Reaction Time
11.
Stud Health Technol Inform ; 142: 413-6, 2009.
Article in English | MEDLINE | ID: mdl-19377196

ABSTRACT

Medical procedures performed by a surgeon or a dentist are sometimes documented for teaching, telemedicine, or liability purposes using a scene-oriented video camera, but the most interesting parts of the scene are often covered by the operator's hand or body. The best view of the scene is obtained close to the operator's field of view or, ideally, from the operator's own perspective. Head-mounted scene cameras are used to create this exclusive point of view, and eye-tracking systems can be used to highlight the point of gaze within the scene image. The presented system extends classical eye trackers with an additional gaze-driven camera. The resulting scene image maintains the overall context, while the image from the gaze-driven camera acts like a magnifying glass and provides a high-resolution view of the fixated detail with an independent exposure, thus creating a high-dynamic-range composite. We show an application in a real dental treatment scenario.
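The picture-in-picture step amounts to pasting the (downscaled) gaze-camera frame into the scene frame at the measured gaze coordinates. A minimal sketch with NumPy arrays follows; the frame sizes, gaze point, and placeholder images are assumptions, and the real system additionally combines independently exposed images.

# Embed the gaze-camera frame as a picture-in-picture patch at the gaze point.
import numpy as np

scene = np.zeros((720, 1280, 3), dtype=np.uint8)           # head-fixed scene frame (placeholder)
gaze_patch = np.full((180, 320, 3), 200, dtype=np.uint8)   # gaze-camera frame, resized (placeholder)
gaze_x, gaze_y = 640, 360                                  # gaze point in scene coordinates (assumed)

h, w = gaze_patch.shape[:2]
top = int(np.clip(gaze_y - h // 2, 0, scene.shape[0] - h))
left = int(np.clip(gaze_x - w // 2, 0, scene.shape[1] - w))

composite = scene.copy()
composite[top:top + h, left:left + w] = gaze_patch         # overlay centered on the gaze point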


Subject(s)
Documentation/methods , Eye Movements , Video Recording/instrumentation , Education, Dental , Humans , Teaching , Video Recording/methods
12.
J Vis ; 8(14): 12.1-17, 2008 Oct 23.
Article in English | MEDLINE | ID: mdl-19146313

ABSTRACT

During free exploration, humans adjust their gaze by combining body, head, and eye movements. Laboratory experiments on the stimulus features driving gaze, however, typically focus on eye-in-head movements, use potentially biased stimuli, and restrict the field of view. Our novel wearable eye-tracking system (EyeSeeCam) overcomes these limitations. We recorded gaze- and head-centered videos of the visual input of observers freely exploring real-world environments (4 indoor, 8 outdoor), yielding approximately 10 h of data. Global power spectra reveal little difference between head- and gaze-centered recordings. Local stimulus features exhibit spatial biases in head-centered coordinates, which are environment-dependent but consistent across observers. Eye-in-head movements center these biases in gaze-centered coordinates, leading to elevated "salient" features at the center of gaze. This shows that central biases in the image feature distributions of "natural" photographs are not a property of the environments but of stimuli already gaze-centered by the photographer. Furthermore, central biases in laboratory subjects' fixation distributions do not result from re-centering of the eyes but are an artifact of display restrictions. Hence, our findings demonstrate that the concept of feature "saliency" transfers from the laboratory to free exploration, but they also highlight the importance of experiments with freely moving eyes, head, and body.
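A "global power spectrum" comparison of this kind reduces to averaging the 2-D Fourier power over frames for each recording type. The sketch below shows the computation on random placeholder frames; real gaze- and head-centered frames would be substituted, and the paper's exact preprocessing is not reproduced here.

# Mean 2-D power spectrum across frames, for gaze- vs. head-centered recordings.
import numpy as np

def mean_power_spectrum(frames):
    """Mean 2-D power spectrum (DC component centered) across grayscale frames."""
    spectra = [np.abs(np.fft.fftshift(np.fft.fft2(f))) ** 2 for f in frames]
    return np.mean(spectra, axis=0)

rng = np.random.default_rng(0)
gaze_frames = rng.random((20, 128, 128))   # placeholder for gaze-centered frames
head_frames = rng.random((20, 128, 128))   # placeholder for head-centered frames

diff = np.log(mean_power_spectrum(gaze_frames)) - np.log(mean_power_spectrum(head_frames))
print(f"mean log-power difference: {diff.mean():.3f}")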


Subject(s)
Environment , Eye Movements/physiology , Fixation, Ocular/physiology , Head Movements/physiology , Vision, Ocular/physiology , Adult , Artifacts , Equipment Design , Female , Humans , Male , Video Recording/instrumentation
13.
Network ; 18(3): 267-97, 2007 Sep.
Article in English | MEDLINE | ID: mdl-17926195

ABSTRACT

During natural behavior, humans continuously adjust their gaze by moving head and eyes, yielding rich dynamics of the retinal input. Sensory coding models, however, typically assume that visual input is smooth or a sequence of static images interleaved by volitional gaze shifts. Are these assumptions valid during free exploration of natural environments? We used an innovative technique to simultaneously record gaze and head movements in humans who freely explored various environments (forest, train station, apartment). Most movements occur along the cardinal axes, and the predominance of vertical or horizontal movements depends on the environment. Eye and head movements co-occur more frequently than their individual statistics predict under an independence assumption. The majority of co-occurring movements point in opposite directions, consistent with a gaze-stabilizing role of eye movements. Nevertheless, a substantial fraction of eye movements point in the same direction as co-occurring head movements. Even under the most conservative assumptions, saccadic eye movements alone cannot account for these synergistic movements. Hence, nonsaccadic eye movements that interact synergistically with head movements to adjust gaze cannot be neglected in natural visual input. Natural retinal input is continuously dynamic and cannot be faithfully modeled as a mere sequence of static frames with interleaved large saccades.
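The co-occurrence claim compares the observed joint probability of eye and head movement against the product of the marginal probabilities expected under independence. The sketch below makes that comparison on synthetic binary movement indicators with a built-in dependence; the probabilities are illustrative, not the recorded statistics.

# Observed co-occurrence of eye and head movements vs. the independence prediction.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
head_moving = rng.random(n) < 0.2
# Eye movements are made more likely when the head moves (built-in dependence).
eye_moving = rng.random(n) < np.where(head_moving, 0.5, 0.1)

p_joint = np.mean(eye_moving & head_moving)
p_independent = eye_moving.mean() * head_moving.mean()
print(f"observed co-occurrence {p_joint:.3f} vs. independence prediction {p_independent:.3f}")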


Subject(s)
Environment , Eye Movements/physiology , Head Movements/physiology , Ocular Physiological Phenomena , Psychomotor Performance/physiology , Visual Perception/physiology , Humans , Models, Neurological , Orientation/physiology , Reaction Time/physiology
14.
Stud Health Technol Inform ; 119: 486-90, 2006.
Article in English | MEDLINE | ID: mdl-16404105

ABSTRACT

A first proof of concept was developed for a head-mounted video camera system that is continuously aligned with the user's direction of gaze. In doing so, it records images from the user's perspective that can document manual tasks during, e.g., surgery. Eye movements are tracked by video-oculography and used as signals to drive servo motors that rotate the camera. Thus, the sensorimotor output of a biological system for the control of eye movements, evolved over millions of years, is used to move an artificial eye. The full multi-sensory processing of eye, head, and surround motion by the vestibular, visual, and somatosensory systems is thereby exploited to drive a technical camera system. A camera guided in this way mimics the natural exploration of a visual scene and acquires video sequences from the perspective of a mobile user, while the oculomotor reflexes naturally stabilize the camera on target during head and target movements. Various documentation and teaching applications in health care, industry, and research are conceivable.
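The control concept described here, eye position measured by video-oculography driving camera servos, can be sketched as a simple mapping from eye angles to servo target angles. Everything below is hypothetical: the calibration constants and the send_servo_command placeholder stand in for whatever motor interface the actual prototype used.

# Map measured eye angles to camera servo commands (illustrative sketch).

CAL_GAIN = 1.0      # servo degrees per degree of eye rotation (assumed calibration)
CAL_OFFSET = 0.0    # alignment offset in degrees (assumed calibration)

def send_servo_command(axis: str, angle_deg: float) -> None:
    """Placeholder for the real servo/motor interface."""
    print(f"{axis} -> {angle_deg:.2f} deg")

def update_camera(eye_h_deg: float, eye_v_deg: float) -> None:
    """One control-loop iteration: align the camera with the current eye position."""
    send_servo_command("horizontal", CAL_GAIN * eye_h_deg + CAL_OFFSET)
    send_servo_command("vertical", CAL_GAIN * eye_v_deg + CAL_OFFSET)

# In the real system this would run at the video-oculography frame rate.
update_camera(eye_h_deg=5.2, eye_v_deg=-1.3)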


Subject(s)
Documentation , Surgical Procedures, Operative/education , Videotape Recording/instrumentation , Vision, Ocular , Germany , Head , Humans
15.
Ann N Y Acad Sci ; 1039: 455-8, 2005 Apr.
Article in English | MEDLINE | ID: mdl-15826998

ABSTRACT

The torsional vestibulo-ocular reflex (tVOR), which is mainly driven by input from the semicircular canals, has been shown to be augmented by otolith input: its gain is increased in the upright compared with the supine head orientation. We tested whether otolith input in the right/left-ear-down and upside-down orientations also contributes to the tVOR. In contrast to current models of canal-otolith interaction, which predict equal responses in the upright and upside-down orientations, we found that the tVOR in the upside-down position is significantly decreased, possibly due to an additive interaction of canal and otolith signals.


Subject(s)
Gravitation , Reflex, Vestibulo-Ocular/physiology , Humans , Movement , Posture , Reference Values , Rotation , Supine Position , Torsion Abnormality