The protocol of this study was approved by the Ethics Committee of the Institute of Cognitive Neuroscience and Learning, Beijing Normal University, and all subjects gave written informed consent. The study complied with the Code of Ethical Principles for Medical Research Involving Human Subjects of the World Medical Association (Declaration of Helsinki). Twenty-two healthy volunteers (postgraduate students; 12 female and 10 male, 20–30 years old) were recruited from the Kunming Institute of Zoology, CAS. Subjects had normal or corrected-to-normal vision and no history of mental disorder. All subjects were paid ¥20. Subjects were initially naive to the experiment but received training to familiarize them with the task procedure before beginning the experimental task.

An EyeLink 2000 Desktop eye-tracking system (SR Research Ltd., Ontario, Canada) was used to present stimuli and record eye movements. Monocular eye-position data were sampled at 2000 Hz. Stimuli were displayed on a 19-inch LCD monitor (DELL E198FPf, 37.5×30.5 cm, resolution of 1024×768 pixels, refresh rate of 60 Hz). The eye-to-screen and eye-to-camera distances were 70 cm and 50 cm, respectively; the screen thus subtended approximately 30°×25° of visual angle, horizontally and vertically. Subjects' heads were immobilized with a chin-rest. Saccades were detected by three thresholds: a velocity threshold of 30°/s, an initial acceleration threshold of 8000°/s², and a displacement threshold of 0.15°. Fixation was defined as the time between two saccades.

Because gender differences have been observed in responses to visual emotional stimuli [71], [72], two equivalent affective picture packages, one for male and one for female subjects, were selected from the International Affective Picture System (IAPS) [26]. Valence was rated on a scale from 1 (most unpleasant) to 9 (most pleasant); arousal was rated on a scale from 1 (sleepy, not at all arousing) to 9 (most exciting).
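The threshold-based saccade criteria above can be illustrated with a simplified detector. This is a minimal sketch using only the velocity and displacement thresholds on synthetic gaze data; the EyeLink parser's actual algorithm, which also applies the 8000°/s² acceleration test, is not reproduced here.

```python
import numpy as np

def detect_saccades(x, y, fs=2000.0, vel_thresh=30.0, disp_thresh=0.15):
    """Flag saccades in a gaze trace by velocity and displacement thresholds.

    x, y : gaze position in degrees of visual angle; fs : sampling rate (Hz).
    Returns a list of (start_index, end_index) pairs.  A minimal sketch,
    not the EyeLink parser (which additionally uses an acceleration test).
    """
    vx = np.gradient(x) * fs            # deg/s, per-sample velocity
    vy = np.gradient(y) * fs
    speed = np.hypot(vx, vy)
    fast = speed > vel_thresh           # samples exceeding 30 deg/s
    saccades = []
    i, n = 0, len(fast)
    while i < n:
        if fast[i]:
            j = i
            while j < n and fast[j]:    # group consecutive fast samples
                j += 1
            disp = np.hypot(x[j - 1] - x[i], y[j - 1] - y[i])
            if disp >= disp_thresh:     # reject sub-threshold displacements
                saccades.append((i, j - 1))
            i = j
        else:
            i += 1
    return saccades
```

On this definition, fixations fall out as the intervals between consecutive detected saccades, matching the paper's definition of fixation.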
Each subject viewed two blocks (60 pictures in total): the valence block (VB, 30 pictures), with 10 pleasant (HV), 10 neutral (MV), and 10 unpleasant (LV) pictures categorized by valence score; and the arousal block (AB, 30 pictures), with 10 exciting (HA), 10 calm (MA), and 10 sleepy (LA) pictures categorized by arousal score. Pictures were complex color scenes involving animals, people, blood, mutilation, nature scenes, etc. Arousal ratings for the valence-block pictures were at the medium level (M = 5.1, SD = 0.29), and valence ratings for the arousal-block pictures were at the neutral level (M = 5.21, SD = 0.69). The pictures' IAPS series numbers are listed in Table S3.

Low-level image properties and familiarity: The feature-based factors of picture complexity and luminance showed no significant differences within the valence block (p = 0.24 and p = 0.18, respectively) or within the arousal block (p = 0.67 and p = 0.81, respectively; Kruskal-Wallis tests). Picture complexity was measured as the compressed image file size in kB [26]; larger file sizes indicate more complex images. Picture luminance was calculated with Adobe Photoshop CS2 (Adobe Systems Inc., USA) on a 0–255 gray scale. The spatial frequencies of images in the two blocks showed no marked differences (Text S1). Familiarity was rated on a 7-point scale (1 = least familiar, 7 = most familiar) by subjects during the task. Differences in the familiarity ratings were significant within the valence block (p<0.01) and within the arousal block (p<0.01), but not between the two blocks (Kruskal-Wallis tests).

Participants sat in a quiet, dark room with their head resting on the chin-rest in front of the stimulus-presentation screen. Before picture display began, the '9-point calibration' program of the eye-tracking system was run to ensure that the EyeLink camera could capture the subject's pupil.
Each subject completed two blocks (60 trials), beginning with the valence block (VB, 30 trials) and followed by the arousal block (AB, 30 trials). Subjects were allowed a rest interval between blocks, the duration of which was left to their discretion. At the onset of a trial, a Gaussian-noise image was displayed on screen for two seconds, during which the subject was asked to fixate on a black central cross. A target affective picture (presented randomly, without repetition) was then displayed for five seconds; subjects were asked to view the picture freely, and the left eye was monitored during this period. Next, a patch image (any part of the affective picture, 250×200 pixels, selected randomly from 30 patches) was presented in the screen center [73]. Subjects were asked to press the key '0' if they thought the patch was part of the target picture, or the key '1' otherwise, and were instructed to respond as quickly as possible. Audio feedback was given (a 'doo' sound for incorrect responses and a 'dee' sound for correct responses). This patch task encouraged subjects to view the picture freely, because the patch was randomly clipped from the target picture and was too small to be recognized easily if the subject had fixated only one location. A familiarity-rating task followed: subjects were asked to press a number key as quickly as possible to rate the target picture on a 7-point familiarity scale (1 = totally unfamiliar, 7 = completely familiar). The '9-point calibration' program was run every ten trials in each block. The entire experiment lasted approximately 40 minutes.

Path-graph modeling and computation of topological metrics were performed in MATLAB (MathWorks Inc., MA, USA). Curve fitting of spectral-embedding path graphs employed the Curve Fitting Toolbox in MATLAB. Scan paths with fewer than five fixations could not be fitted by our model, so these data were discarded.
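The study's specific path-graph model and fitting procedure are its own; as a generic illustration of the idea, a scan path of n fixations can be represented as a path graph P_n (each fixation a node, consecutive fixations linked) and spectrally embedded via the eigendecomposition of its graph Laplacian. The sketch below is hypothetical and not the authors' MATLAB code; it also enforces the paper's minimum of five fixations.

```python
import numpy as np

def path_graph_spectrum(n_fixations):
    """Spectral embedding of a path graph P_n built from a scan path.

    Hypothetical sketch: nodes are fixations in temporal order,
    edges join consecutive fixations.  Returns the eigenvalues and
    eigenvectors of the combinatorial Laplacian L = D - A.
    """
    if n_fixations < 5:
        # the study discarded scan paths with fewer than five fixations
        raise ValueError("scan path too short to fit")
    A = np.zeros((n_fixations, n_fixations))
    idx = np.arange(n_fixations - 1)
    A[idx, idx + 1] = A[idx + 1, idx] = 1.0   # chain adjacency
    L = np.diag(A.sum(axis=1)) - A
    vals, vecs = np.linalg.eigh(L)            # ascending eigenvalues
    # for P_n these are 2 - 2*cos(k*pi/n), k = 0..n-1
    return vals, vecs
```

Topological metrics and fitted curves would then be computed from this embedding; the curve-fitting step itself (done in the Curve Fitting Toolbox in the study) is omitted here.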
Statistical comparisons employed Welch's t-test, ANOVA, and Tukey's post hoc test. Rating scores were analyzed with nonparametric Kruskal-Wallis tests. Trend analysis of graded affective effects was performed with an F-test.
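For reference, Welch's t-test (the unequal-variance variant used above) reduces to a few lines. This sketch computes only the t statistic and the Welch-Satterthwaite degrees of freedom; a p-value would come from the t distribution, e.g. via scipy.stats.ttest_ind(a, b, equal_var=False).

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two samples
    with possibly unequal variances (sketch; no p-value computed)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    va = a.var(ddof=1) / len(a)     # squared standard error, sample a
    vb = b.var(ddof=1) / len(b)     # squared standard error, sample b
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    # Welch-Satterthwaite approximation to the degrees of freedom
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df
```

Unlike Student's t-test, this does not pool the two variances, which is why it is preferred when group variances (or sizes) differ.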