This study was conducted under the research project "A study of interactions between cognition, emotion and physiology" (Protocol No. 100-014-E), which was approved by the Institutional Review Board (IRB) of the National Taiwan University Hospital Hsinchu Branch. Written informed consent was obtained from all subjects before the experiment.
Fifty-two subjects aged between 20 and 26 (M = 21.3, SD = 1.2; 44 men, 8 women) performed keyboard typing tasks immediately after being presented with emotional stimuli. The subjects were college students recruited from a university in Taiwan, with normal hearing in terms of relative sensitivity at different frequencies. All subjects self-reported that they were nonsmokers and healthy, with no history of brain injury or cardiovascular problems. They also reported normal or corrected-to-normal vision and a normal range of finger movement. All subjects were right-handed.
A subject wore earphones during the experiment and was instructed to type a target typing text, "748596132", once immediately after hearing each of 63 sounds from the International Affective Digitized Sounds, 2nd edition (IADS-2) [42]. The experiment was based on a simple dimensional view of emotion, which assumes that an emotion can be defined by a combination of values on two dimensions: valence and arousal. To assess these two dimensions of the affective space, the Self-Assessment Manikin (SAM), an affective rating system devised by Lang [43], was used to acquire the affective ratings. Each trial began with an instruction ("Please type in the target typing text after listening to the next sound") presented for 5 s. Then, the sound stimulus was presented for 6 s. After the sound terminated, the SAM was presented together with a rating instruction ("Please rate your feeling on both dimensions after typing the target typing text '748596132'"). The subject first typed the target typing text once, and then rated valence and arousal. A standard 15 s rating period was used, allowing ample time for the subject to make the SAM ratings. A computer program controlled the presentation and timing of the instructions and sounds, and the keystroke data were recorded during the typing task. In addition to the 63 trials, 3 practice trials and a training session were administered before the experiment. Three practice sounds (birds, female sigh, and baby cry) gave the subject a rough sense of the range of contents that would be presented. After these practice trials came the training session, in which the subject repeatedly typed the target typing text (presented on screen as blue text on a gray background) for 40 s, using the number pad located on the right side of a standard keyboard (shown in Fig 1(a)).
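The per-trial timeline above (5 s instruction, 6 s sound, then a 15 s typing-and-rating period) can be sketched as a simple schedule. This is only an illustration of the timing; the phase labels and function names are our own, not taken from the authors' control program:

```python
# Illustrative sketch of one trial's timeline (durations in seconds).
# Phase labels are our own; the actual experiment software was custom.
TRIAL_PHASES = [
    ("instruction", 5.0),      # "Please type in the target typing text..."
    ("sound", 6.0),            # IADS-2 stimulus playback
    ("typing_and_sam", 15.0),  # type "748596132", then SAM valence/arousal ratings
]

def phase_onsets(phases):
    """Return each phase's onset relative to trial start, plus total length."""
    onsets, t = {}, 0.0
    for name, duration in phases:
        onsets[name] = t
        t += duration
    return onsets, t

onsets, total = phase_onsets(TRIAL_PHASES)
```

Under this schedule each trial lasts 26 s, so 63 trials occupy roughly half an hour of stimulus time before inter-trial overhead.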
Figure data removed from full text. Figure identifier and caption: 10.1371/journal.pone.0129056.g001. The number pad of the keyboard used in our experiment, with an illustration of the design concept of our target number typing sequence. The arrow shows the order in which the typing target changes. For the (x, y) pairs in the heptagons, x represents the order of a typing target and y represents the desired finger (i.e., thumb (f1), index finger (f2), middle finger (f3), ring finger (f4), or little finger (f5, pinky)) used to type the corresponding typing target.
A number sequence was used as the target typing text, instead of an alphabetic sequence or symbols, to avoid possible interference of linguistic context with the subjects' emotional states. Across the various number sequences used in our pilot experiments [38, 44], we found differences in keystroke typing between subjects in different emotional states. However, we also found that the relationship between keystroke typing and emotional states may vary with which keys are typed and in what order. Comparing keystroke typing between emotional states using different number sequences may therefore reduce the power of the statistical tests (given the same number of trials). Hence, to conduct a more conservative comparison across emotions and to enhance the generalizability of this study, we decided to use a single, general-purpose number sequence. We designed the target typing text "748596132" to 1) be easy to type without requiring the subjects to make abrupt changes in posture, 2) have its digits fairly distributed over the number pad, and 3) encourage all subjects to maintain the same posture (i.e., the same finger usage) when typing the sequence [38] (see Fig 1(b) for more detail). The experiment was kept as short as possible so that the subjects would not tire of typing on the keyboard; indeed, all subjects reported that they were not fatigued after the experiment.
The stimuli were 63 sounds selected from the IADS-2 database, which is developed and distributed by the NIMH Center for the Study of Emotion and Attention (CSEA) at the University of Florida [42]. The IADS-2 was developed to provide a set of normative emotional stimuli for experimental investigations of emotion and attention, and can easily be obtained by e-mail request. The database contains various affective sounds shown to be capable of inducing diverse emotions across the affective space [45]. The sounds used as stimuli were selected in compliance with the IADS-2 sound set selection protocol described in [42], which constrains the number of sounds used in a single experiment and the distribution of the emotions expected to be induced by the selected sounds. Two different stimulus orders were used to balance the position of a particular stimulus within the series across subjects. The physical properties of the sounds were also controlled to prevent clipping and to control for loudness [42]. The SAM is a non-verbal pictorial assessment designed to assess the emotional dimensions (i.e., valence and arousal) directly by means of two sets of graphical manikins. It has been extensively tested in conjunction with the IADS-2 and has been used in diverse theoretical studies and applications [46–48]. The SAM takes very little time to complete (5 to 10 seconds), and because it is pictorial, there is little chance of the terminological confusion that can arise in verbal assessments. The SAM has also been reported to index results consistently across cultures [49] and to agree with results obtained using the Semantic Differential scale (the verbal scale provided in [50]). The SAM we used was identical to the 9-point rating scale version used in [42], in which the valence dimension ranges from a smiling, happy figure to a frowning, unhappy figure.
For the arousal dimension, the SAM ranges from an excited, wide-eyed figure to a relaxed, sleepy figure. The SAM ratings in the current study were scored such that 9 represented a high rating on each dimension (i.e., positive valence, high arousal) and 1 represented a low rating on each dimension (i.e., negative valence, low arousal).
During the experiment, a subject wore earphones (Sennheiser PC160SK Stereo Headset) and sat on an office chair (0.50 x 0.51 m, height 0.43 m) in a small, quiet office (7.6 x 3.2 m) with no other people present. The office had a window, and ventilation was guaranteed. The computer system (Acer Veriton M2610; processor: Intel Core i3-2120 3.3 GHz/3 MB/65 W; memory: 4 GB DDR3-1066; operating system: Microsoft Windows 7 Professional 64-bit) used by the subject was placed under a desk (0.70 x 1.26 m, height 0.73 m). The subject was seated approximately 0.66 m from the computer screen (ViewSonic VE700, 17 inch, 1280 x 1024 resolution). The keyboard was an Acer KU-0355 (18.2 x 45.6 cm, a standard keyboard with the United States layout, typically used with the Windows operating system) connected to the computer through a USB 2.0 interface. The distance between the centers of adjacent keys (size: 1.2 x 1.2 cm) of the number pad was 2 cm. The keyboard lifts (the two small supports at the back of the keyboard), which raise the back of the keyboard by 0.8 cm when used, were not used in this experiment. The subject sat approximately 0.52 m from the center of the number pad (i.e., the digit "5"). The keystroke collection software was developed as a C# project in Visual Studio 2008 and executed on the .NET Framework (version 3.5) platform. C# was chosen because it provides more complete Application Programming Interfaces (APIs) for keystroke-interrupt detection on Microsoft Windows operating systems than other programming languages such as R, MATLAB, Java, and Python.
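The actual logging software was written in C# against the Windows keystroke-interrupt APIs; the following Python sketch only illustrates the bookkeeping such a logger performs (timestamping every key-down and key-up event). The class name and event format here are our own illustrative choices, not the authors' implementation:

```python
import time

# Minimal sketch of keystroke logging: record a timestamp for every
# key-down and key-up event. A real logger would receive these events
# from OS-level keyboard interrupts; here they are plain method calls.
class KeystrokeLogger:
    def __init__(self, clock=time.perf_counter):
        self.clock = clock            # injectable clock, for testing
        self.events = []              # list of (key, "down"/"up", timestamp)

    def key_down(self, key):
        self.events.append((key, "down", self.clock()))

    def key_up(self, key):
        self.events.append((key, "up", self.clock()))
```

Injecting the clock keeps the sketch deterministic under test; in production one would rely on a monotonic high-resolution timer such as `time.perf_counter`.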
In total, 63 (trials) x 52 (subjects) = 3,276 rows of raw data were collected during the experiment. However, 117 rows (3.6% of the 3,276 samples) were excluded because the SAM rating was not completed. In our analysis, a typed sequence is a "correctly typed sequence" if the target typing text was typed correctly and an "incorrectly typed sequence" otherwise. For instance, if a subject typed "7485961342", in which the "4" at the 9th position is an erroneous extra keystroke, the sequence was considered incorrectly typed. A pre-processing routine was applied to the raw data to separate the correctly typed sequences from the incorrectly typed ones. Keystroke duration and keystroke latency features were extracted only from the correctly typed sequences (91.2% of the 3,024 samples). The keystroke duration is the time elapsed from a key press to its release, whereas the keystroke latency is the time elapsed from one key release to the next key press [51]. The extracted keystroke duration and keystroke latency features were each submitted to a two-way 3 (Valence: negative, neutral, and positive) x 3 (Arousal: low, medium, and high) Repeated Measures ANOVA [52]. To analyze the accuracy of keyboard typing, the accuracy data (0 for an incorrectly typed sequence, 1 for a correctly typed sequence) of all typed sequences was submitted to the same two-way 3 (Valence) x 3 (Arousal) Repeated Measures ANOVA design. Post-hoc analysis was conducted using multiple t-tests with Bonferroni correction. The 9-point SAM ratings of valence and arousal were translated into the three levels of the ANOVA factors Valence and Arousal. Eleven subjects were excluded from the Repeated Measures ANOVA (leaving 2,583 rows of raw data) because they had numerous empty cells. These subjects reported a small range of changes in SAM ratings (i.e., unsuccessful emotion elicitation) throughout the experiment, which led to empty cells. Specifically, we removed the 11 subjects with 3 or more empty cells (missing values) in the 3 (Valence: negative, neutral, and positive) x 3 (Arousal: low, medium, and high) table; the removed subjects had 6, 6, 6, 5, 4, 4, 4, 3, 3, 3, and 3 empty cells, respectively. We chose not to impute these missing values because the research objective of the current study was to examine keystroke dynamics across the 3 x 3 emotional conditions, for which multiple imputation may yield unreliable results. For completeness, the ANOVA results of the dataset that retains these subjects, with all missing values imputed by average values, are also presented in the results section, next to the ANOVA results with these subjects excluded. The significance level α for all statistical hypothesis tests in this paper was set to 0.05.
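As a rough illustration of the pre-processing described above, the following sketch computes keystroke durations and latencies from (key, press-time, release-time) tuples, checks a typed sequence against the target, and collapses a 9-point SAM rating into three levels. The function names are our own, and the 1–3 / 4–6 / 7–9 cut points are an assumption made for illustration; the paper does not state the actual level boundaries:

```python
TARGET = "748596132"

def extract_features(events):
    """Given (key, press_time, release_time) tuples for one typed sequence,
    return (typed_string, durations, latencies, is_correct).

    Keystroke duration: release_time - press_time of each key.
    Keystroke latency: press_time of a key minus the release_time of the
    previous key [51]."""
    typed = "".join(key for key, _, _ in events)
    durations = [release - press for _, press, release in events]
    latencies = [events[i][1] - events[i - 1][2] for i in range(1, len(events))]
    return typed, durations, latencies, typed == TARGET

def sam_level(rating):
    """Collapse a 9-point SAM rating into three ANOVA levels.
    NOTE: the 1-3 / 4-6 / 7-9 cut points are an illustrative assumption."""
    if rating <= 3:
        return "low"     # e.g., negative valence / low arousal
    if rating <= 6:
        return "medium"  # e.g., neutral valence / medium arousal
    return "high"        # e.g., positive valence / high arousal
```

A sequence whose typed string differs from `TARGET` in any position contributes only to the accuracy analysis (coded 0), while its duration and latency features are discarded, mirroring the separation of correctly and incorrectly typed sequences described above.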