The study has been conducted according to the principles expressed in the Declaration of Helsinki. The design of the study was approved by the ethical committee of the Psychology Department of the University of Geneva, and all participants gave written informed consent before participating in the study.

Thirty-six students (18 males, mean age 24 years, range 18–33) were recruited through advertisement at the University of Geneva to participate simultaneously in a “game involving repeated anonymous computer-mediated interactions with other persons” in return for payment that would depend on their behavior during the experiment. Participants who asked for more information were told that they would be required to make a series of decisions, that their decisions would directly influence their payoff as well as the payoff of others, and that, similarly, the decisions of others would affect their own payoff as well as the payoffs of others.

The instructions and questionnaires were similar to those used in Bediou et al. [28] and are briefly outlined in the procedure below. In this study, we used ratings of the intensity of the emotions that participants experienced as a consequence of the offer. To assess these emotional experiences, we used the Geneva Emotion Wheel (GEW), which allows ratings of 20 emotion categories and their intensity (five levels) to be obtained at the same time. The 20 emotion terms are organized according to two underlying dimensions of power (or control, potency) and valence (or pleasure, goal conduciveness), defining four quadrants (see appendix): positive/high power (surprise, pride, joy, satisfaction, and pleasure), positive/low power (content, sympathy, admiration, relief, and compassion), negative/low power (sadness, guilt, regret, shame, and disappointment), and negative/high power (anxiety, disgust, contempt, envy, and anger). A detailed description of the instrument can be found in Scherer et al. [40].

Individual differences in social value orientation were assessed with a computerized version of the SVO measure, which was administered to all subjects before they received further instructions about the task. Experimental constraints prevented us from administering the SVO well in advance of the session; administering it immediately before the task could thus have produced carry-over effects on behavior during the experiment. Future studies should try to separate these two measures in time, or at least counterbalance their order. We used the classic nine-item triple-dominance measure [39], which classifies individuals as cooperators, individualists, or competitors if they make six of nine choices consistent with a given orientation. Here we used the alternative scoring method, according to which an individual who makes six choices that are consistent with either an individualistic or a competitive orientation is classified (more broadly) as pro-self [41], [42]. Because 11 of the 12 pro-selfs were primarily classified as individualists, we refer to this group as individualists (N = 12), as opposed to prosocials (N = 18); a schematic example of this scoring rule is sketched below.

On arrival in the laboratory, participants were divided into two groups and directed to different rooms. Participants in each room were then told that they would participate in 28 independent interactions, each with a different and anonymous partner. Half of these partners would be students from the same university, all located in the other room (University of Geneva, 14 trials), whereas the other half would be students from a different university (University of Zürich, 14 trials).
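To make the triple-dominance scoring concrete, the following minimal sketch shows one way the primary classification and the broader pro-self grouping described above could be computed. It is an illustrative assumption, not the instrument's actual code; the function name, labels, and return format are hypothetical.

```python
from collections import Counter

def classify_svo(choices, threshold=6):
    """Classify nine SVO choices; each choice is 'prosocial', 'individualistic',
    or 'competitive'. Returns (primary, broad) labels."""
    counts = Counter(choices)
    # Primary triple-dominance classification: at least six consistent choices
    primary = "unclassified"
    for orientation in ("prosocial", "individualistic", "competitive"):
        if counts[orientation] >= threshold:
            primary = orientation
    # Alternative (broader) scoring: six or more individualistic or competitive
    # choices, taken together, are classified as pro-self
    if counts["individualistic"] + counts["competitive"] >= threshold:
        broad = "pro-self"
    elif counts["prosocial"] >= threshold:
        broad = "prosocial"
    else:
        broad = "unclassified"
    return primary, broad

# Example: seven individualistic and two competitive choices
print(classify_svo(["individualistic"] * 7 + ["competitive"] * 2))
# -> ('individualistic', 'pro-self')
```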
We then instructed the participants that each interaction would consist of two phases: a production phase and a distribution phase. In reality, there was no interaction between participants; all participants were always paired with a computer. The information about interactions with participants from Zürich was used as a cover story to allow multiple unique and anonymous interactions per participant without reducing the believability of the interaction manipulation. Due to organizational constraints, we could not test two groups of 28 participants simultaneously, which would have been necessary to provide each of our 36 actual participants with a different anonymous partner in each of the 28 interactions. Subjects were informed that the outcomes of all interactions (i.e., the responders' decisions) would not be revealed until the end of the experiment and that they would be paid according to these interaction outcomes.

In the production phase, both players answered as many simple math calculations as possible in a limited amount of time in order to put more money into a shared pie. This phase allowed us to introduce heterogeneous contributions to the shared pie. In order to simulate the low and high contribution conditions, we used false feedback about the other (virtual) player's performance, which was based on the actual performance of the participant, plus or minus an additional random variation to increase the realism of the online visual feedback. In half of the trials, the virtual player's contribution was programmed to correspond to between 20% and 30% of the total shared pie, so that the participant had produced most of it (high contribution, 14 trials), whereas in the other half of the trials (low contribution, 14 trials), the virtual player was programmed to contribute between 70% and 80% of the total shared pie.

In the distribution phase, the money that had been produced by the participant and his or her presumed partner had to be distributed according to the rules of a UG, which were carefully explained to the participants. First, an ostensibly random allocation procedure determined who would be the proposer and who would be the responder, independently of their respective contributions. On proposer trials (12 trials), participants were asked to move a cursor between zero and the total amount of the pie in order to make an offer. On responder trials (16 trials), the computer was programmed to present, after a random delay (500–4500 ms), one of four different types of offers: equal offers (50% of the pie), equitable offers (based on contributions), unfair offers (10% of the pie), and hyperfair offers (90% of the pie); see the illustrative sketch below. Each offer type was presented once in the low contribution condition and once in the high contribution condition, in a pseudo-randomized order. Responders were first asked to accept or reject each offer (in the case of rejection, both players got nothing) and then to rate the fairness of the offer as well as their emotional reaction to it. Thus, three types of measures were collected: participants' behavior (proposers' offers and responders' acceptance decisions), their judgments (responders' ratings of the fairness of the offers on a 7-point scale), and their emotions (responders' self-reports on the GEW). Additional measures included participants' self-ratings of the effort invested in the production phase and of their satisfaction. Note that, to avoid priming effects and to make the task easier, the order of the decision (accept/reject) and ratings (fairness and emotions) was kept constant both within and across subjects. Future studies may consider inverting this order to examine potential effects on a behavior/emotion discrepancy.
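The four offer types can be expressed as simple fractions of the shared pie. The sketch below is only an illustration under the assumption that an “equitable” offer is proportional to the responder's own contribution; the function name and parameters are hypothetical and not part of the study's stimulus software.

```python
def offer_amounts(pie, responder_contribution_share):
    """Return the amount offered to the responder for each offer type.

    pie: total amount produced in the production phase (e.g. in CHF)
    responder_contribution_share: fraction of the pie the responder produced (0-1)
    """
    return {
        "equal":     0.50 * pie,                          # 50% of the pie
        "equitable": responder_contribution_share * pie,  # proportional to contribution
        "unfair":    0.10 * pie,                          # 10% of the pie
        "hyperfair": 0.90 * pie,                          # 90% of the pie
    }

# Example: a 20 CHF pie in a high contribution trial (responder produced ~75%)
print(offer_amounts(20, 0.75))
# -> {'equal': 10.0, 'equitable': 15.0, 'unfair': 2.0, 'hyperfair': 18.0}
```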
Before the game started, participants' attention was drawn to the following points, which were written on a board and thus always visible: (i) All interactions are independent and anonymous. (ii) For each interaction, the allocation of roles is independent of the contributions. (iii) The responders' decisions will not be communicated to the proposers before the end of the experiment. (iv) You are playing for real money: the money that you will earn today depends on your decisions and the other players' decisions.

After the experiment, subjects were debriefed individually and paid according to their interaction outcomes. Each participant was shown a summary table of all his or her interactions, including proposers' offers and responders' decisions. The outcomes of all these interactions were summed and added to a 5 CHF “show-up” fee to determine the participant's payment. For proposer trials, we used a rule to realistically simulate responders' behavior in the ultimatum game, based on the literature and on the results of our previous study (see the payoff sketch below). All offers above 20% of the shared pie or above the participant's contribution were accepted. Offers below the contribution and below 20% of the shared pie were rejected. The average payment was CHF 72 (SD = 25, range 35–122). During debriefing, we explained the different experimental manipulations in detail and the reasons for them. Subjects were allowed to ask questions. None of the subjects doubted that the interactions had been real, and none of them reported negative feelings regarding their participation in the study.
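The simulated-responder rule used to pay proposer trials, together with the show-up fee, amounts to a simple decision function. The sketch below is a hedged illustration of that rule as stated above; the parameter names and the contribution threshold passed in are assumptions, not the authors' actual payment script.

```python
def proposer_trial_payoff(offer, pie, contribution):
    """Return the proposer's earnings for one simulated proposer trial.

    offer: amount offered to the (simulated) responder
    pie: total amount produced in that interaction
    contribution: the contribution used as the acceptance threshold (see text)
    """
    # Accept if the offer exceeds 20% of the pie or the contribution threshold;
    # otherwise reject, in which case both players earn nothing on that trial.
    accepted = offer > 0.20 * pie or offer > contribution
    return pie - offer if accepted else 0.0

def total_payment(trial_payoffs, show_up_fee=5.0):
    """Sum all interaction outcomes and add the 5 CHF show-up fee."""
    return sum(trial_payoffs) + show_up_fee

# Example: a 3 CHF offer from a 20 CHF pie (15% of the pie, below a 5 CHF
# threshold) is rejected and pays 0; a 6 CHF offer is accepted and pays 14 CHF.
print(total_payment([proposer_trial_payoff(3, 20, 5), proposer_trial_payoff(6, 20, 5)]))
# -> 19.0
```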