Recently, in one of my economics classes, the professor had us play out a game theory scenario. Two pieces of context: first, only a minority of the class are economics majors; second, the class has over 20 students (decently sized for a Puget Sound econ course). The game was a basic variable contribution mechanism (VCM) game. The class split into groups of around three, and each group could contribute ‘tokens’ to a group pot to earn points for the entire class, or keep the ‘tokens’ to earn points only for their own group. As stated, the game was a fairly vanilla version of the classic economics VCM game, with one Nash equilibrium and one socially optimal outcome (see figure 1). The professor then added a caveat that changed the game dramatically: the group with the most points would receive two extra-credit points in the class, and any tied groups would receive the points as well. This changed everything. Suddenly a salient incentive to win had been added to the game, and many of the students sat up and listened a little harder. Besides energizing the class, this salient incentive created a second point at which everyone could win, a second socially optimal outcome (given that this class is not curved) (see figure 2). As the game started, I wondered whether the IPE majors, younger econ majors, and various other majors would see that their best strategic play was to hold fast to the Nash equilibrium and contribute no ‘tokens’. All but one group kept all their ‘tokens’, meaning that every group except one was on track to earn the extra-credit points. The game changed again when the professor allowed the groups to talk to each other and attempt to coordinate, leading one group to argue that if we all gave ‘tokens’, everyone would win. In that moment I knew one thing: they were all about to lose.
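The incentive structure can be sketched in a few lines of code. The numbers below (endowment, per-token rates) are hypothetical, since the essay doesn't specify the actual values; what matters is that keeping a token pays a group more than its own share of the pot, while the tie rule means identical choices, whatever they are, leave everyone tied for the extra credit.

```python
# Illustrative sketch of the classroom VCM game with the tie rule.
# All payoff numbers are made up; only the structure mirrors the game.

def scores(contributions, endowment=10, keep_rate=2, pot_rate=1):
    """Each group keeps (endowment - c) tokens worth keep_rate each,
    and every group earns pot_rate per token in the shared pot."""
    pot = sum(contributions)
    return [(endowment - c) * keep_rate + pot * pot_rate
            for c in contributions]

def winners(contributions):
    """The extra credit goes to every group tied for the top score."""
    s = scores(contributions)
    top = max(s)
    return [i for i, v in enumerate(s) if v == top]

# Nash equilibrium: nobody contributes, everyone ties, all seven win.
print(winners([0] * 7))                      # [0, 1, 2, 3, 4, 5, 6]

# Full cooperation: everyone contributes, still a tie, all seven win.
print(winners([10] * 7))                     # [0, 1, 2, 3, 4, 5, 6]

# What actually happened: four groups contributed, three held back,
# and only the holdouts won the extra credit.
print(winners([10, 10, 10, 10, 0, 0, 0]))    # [4, 5, 6]
```

The third case is the whole story of the class period: a contributing group can never out-score a hold-out group, because the pot pays everyone equally while kept tokens pay only the keeper.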
When the game finished, 4 of the 7 groups had given ‘tokens’ in an attempt to earn points for the whole class, meaning only 3 groups had won. After class, a friend of mine who was experiencing his first economics game asked an older econ major and me why we hadn’t contributed, even though contributing would have led to everyone winning. The older econ major answered that, strategically speaking, it was smarter to hold the ‘tokens’ because the expected outcome was a 100% chance of winning. I responded the same. My friend, a younger econ major, then argued that we gained no benefit from refusing to help everyone win. The clear answer is that the benefit we gained, beyond the expected value, was the utility of knowing we would win no matter what the other groups chose. He was still not satisfied, and pointed to another member of the class who had argued for everyone to work together and then, in the end, contributed no ‘tokens’. That made me think, not only about this classmate’s motives, but about my own. The utility I gained from watching people who may or may not have known better make choices that directly cost them the chance at two extra points was a moment of Schadenfreude. The fact that one member of our class got more than half the class to lose, despite gaining nothing from it: pure Schadenfreude. I think the truth is that everyone has seen Schadenfreude grow so great that it flips the opportunity cost of someone’s decision almost a complete 180. Every time you see someone throw a game just to make another person lose: Schadenfreude. Every time a TV or movie character chooses revenge over a clearly better alternative: Schadenfreude. Every time someone shares the wrong info in a class that doesn’t have a curve: Schadenfreude.
While not all our decisions come down to the simple factor of pleasure at other people’s pain, it is often a surprising and unexpected factor that can lead to seemingly irrational decisions.