
Publications

2007
Karni, Edi. Bayesian Decision Theory And The Representation Of Beliefs. Discussion Papers 2007. Web. Publisher's Version. Abstract:
In this paper, I present a Bayesian decision theory and define choice-based subjective probabilities that faithfully represent a Bayesian decision maker's prior and posterior beliefs regarding the likelihood of the possible effects contingent on his actions. I argue that no equivalent results can be obtained in Savage's (1954) subjective expected utility theory and give an example illustrating the potential harm caused by ascribing to a decision maker subjective probabilities that do not represent his beliefs.
Abba M. Krieger, Moshe Pollak, and Ester Samuel-Cahn. Beat The Mean: Better The Average. Discussion Papers 2007. Web. Publisher's Version. Abstract:
We consider a sequential rule, where an item, such as a university faculty member, is admitted into the group only if its score is better than the average score of those already belonging to the group. We study four variables: the average score of the members of the group after k items have been selected, the time it takes (in terms of the number of observed items) to assemble a group of k items, the average score of the group after n items have been observed, and the number of items kept after the first n items have been observed. We develop the relationships between these variables, and obtain their asymptotic behavior as k (respectively, n) tends to infinity. The assumption throughout is that the items are independent, identically distributed, with a continuous distribution. Though knowledge of this distribution is not needed to implement the selection rule, the asymptotic behavior does depend on the distribution. We study in some detail the Exponential, Pareto and Beta distributions. Generalizations of the "better than average" rule to the β better-than-average rules are also considered. These are rules where an item is admitted to the group only if its score is better than β times the present average of the group, where β > 0.
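The selection rule itself is easy to simulate. Below is a minimal sketch in Python; the Exponential(1) score distribution and the parameter values are illustrative assumptions, and the paper's asymptotic results are not reproduced here.

```python
import random

def better_than_average_selection(stream, beta=1.0):
    """Admit an item iff its score exceeds beta times the current group
    average (beta = 1 gives the plain better-than-average rule).
    The first item is always admitted."""
    group = []
    for score in stream:
        if not group or score > beta * (sum(group) / len(group)):
            group.append(score)
    return group

# Example run on 10,000 i.i.d. Exponential(1) observations.
random.seed(0)
stream = [random.expovariate(1.0) for _ in range(10_000)]
group = better_than_average_selection(stream)
print(len(group), sum(group) / len(group))  # items kept and their average
```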
Zapechelnyuk, Andriy. Better-Reply Strategies With Bounded Recall. Discussion Papers 2007. Web. Publisher's Version. Abstract:
A decision maker (an agent) is engaged in a repeated interaction with Nature. The objective of the agent is to guarantee himself a long-run average payoff as large as the best-reply payoff to Nature's empirical distribution of play, no matter what Nature does. An agent with perfect recall can achieve this objective by a simple better-reply strategy. In this paper we demonstrate that the relationship between perfect recall and bounded recall is not straightforward: an agent with bounded recall may fail to achieve this objective, no matter how long his recall is and no matter what better-reply strategy he employs.
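For intuition, here is a minimal Python sketch of one bounded-recall better-reply strategy. The 2x2 matching payoffs and the i.i.d. Nature are illustrative assumptions, not the paper's adversarial construction; the run only shows how recall length affects the realized average payoff.

```python
import random
from collections import Counter, deque

# Payoffs to the agent in an assumed 2x2 "matching" game against Nature
# (rows: agent's action, columns: Nature's action).
PAYOFF = [[1.0, 0.0],
          [0.0, 1.0]]

def bounded_recall_better_reply(nature_moves, recall):
    """Play a best reply to the empirical distribution of Nature's last
    `recall` moves; return the agent's average payoff."""
    window = deque(maxlen=recall)
    total = 0.0
    for nature in nature_moves:
        if window:
            counts = Counter(window)
            agent = max((0, 1), key=lambda a: sum(
                PAYOFF[a][b] * counts[b] for b in (0, 1)))
        else:
            agent = 0  # arbitrary first move
        total += PAYOFF[agent][nature]
        window.append(nature)
    return total / len(nature_moves)

random.seed(1)
nature = [1 if random.random() < 0.7 else 0 for _ in range(100_000)]
for m in (1, 10, 1000):  # recall length
    print(m, round(bounded_recall_better_reply(nature, m), 3))
# Against this i.i.d. Nature the best-reply payoff is 0.7; short recall
# falls short of it, while longer recall approaches it.
```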
Emek, Yuval, and Michal Feldman. Computing An Optimal Contract In Simple Technologies. Discussion Papers 2007. Web. Publisher's Version. Abstract:
We study an economic setting in which a principal motivates a team of strategic agents to exert costly effort toward the success of a joint project. The action taken by each agent is hidden and affects the (binary) outcome of the agent's individual task stochastically. A Boolean function, called technology, maps the individual tasks' outcomes into the outcome of the whole project. The principal induces a Nash equilibrium on the agents' actions through payments that are conditioned on the project's outcome (rather than the agents' actual actions) and the main challenge is that of determining the Nash equilibrium that maximizes the principal's net utility, referred to as the optimal contract. Babaioff, Feldman and Nisan [1] suggest and study a basic combinatorial agency model for this setting. Here, we concentrate mainly on two extreme cases: the AND and OR technologies. Our analysis of the OR technology resolves an open question and disproves a conjecture raised in [1]. In particular, we show that while the AND case admits a polynomial-time algorithm, computing the optimal contract in the OR case is NP-hard. On the positive side, we devise an FPTAS for the OR case, which also sheds some light on optimal contract approximation of general technologies.
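As a rough illustration of the model (assumed parameters GAMMA, DELTA, and C; not the paper's algorithms or notation), the sketch below brute-forces the optimal contract for three agents. The exhaustive search over effort profiles is exponential in the number of agents, in line with the NP-hardness of the OR case.

```python
from itertools import product

# Each agent's task succeeds with probability GAMMA if he shirks and
# DELTA if he exerts effort at cost C; payments are conditioned only on
# the project's binary outcome.
GAMMA, DELTA, C = 0.5, 0.9, 1.0

def success_prob(efforts, technology):
    """Probability that the whole project succeeds."""
    probs = [DELTA if e else GAMMA for e in efforts]
    if technology == "AND":        # all tasks must succeed
        p = 1.0
        for q in probs:
            p *= q
        return p
    fail_all = 1.0                 # "OR": at least one task succeeds
    for q in probs:
        fail_all *= 1.0 - q
    return 1.0 - fail_all

def principal_utility(efforts, value, technology):
    """Net utility when each exerting agent receives the minimal
    success-contingent payment that makes effort a best response."""
    p = success_prob(efforts, technology)
    payments = 0.0
    for i, e in enumerate(efforts):
        if e:
            shirk = list(efforts)
            shirk[i] = 0
            gain = p - success_prob(shirk, technology)
            payments += C / gain   # binding incentive constraint
    return p * (value - payments)

for tech in ("AND", "OR"):
    best = max(product((0, 1), repeat=3),
               key=lambda s: principal_utility(s, 50.0, tech))
    print(tech, best, round(principal_utility(best, 50.0, tech), 2))
```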
Ullmann-Margalit, Edna. Difficult Choices: To Agonize Or Not To Agonize?. Discussion Papers 2007. Web. Publisher's Version. Abstract:
What makes a choice difficult, beyond being complex or difficult to calculate? Characterizing difficult choices as posing a special challenge to the agent, and as typically involving consequences of significant moment as well as clashes of values, the article proceeds to compare the way difficult choices are handled by rational choice theory and by the theory that preceded it, Kurt Lewin's "conflict theory." The argument is put forward that within rational choice theory no choice is in principle difficult: if the object is to maximize some value, the difficulty can be at most calculative. Several prototypes of choices that challenge this argument are surveyed and discussed (picking, multidimensionality, "big decisions" and dilemmas); special attention is given to difficult choices faced by doctors and lawyers. The last section discusses a number of devices people employ in their attempt to cope with difficult choices: escape, "reduction" to non-difficult choices, and second-order strategies.
Avrahami, Judith, and Yaakov Kareev. Distribution Of Resources In A Competitive Environment. Discussion Papers 2007. Web. Publisher's Version. Abstract:
When two agents of unequal strength compete, the stronger one is expected to always win the competition. This expectation is based on the assumption that evaluation of performance is flawless. If, however, the agents are evaluated on the basis of only a small sample of their performance, the weaker agent still stands a chance of winning occasionally. A theoretical analysis indicates that for this to happen, the weaker agent must introduce variability into the effort he or she invests in the behavior, such that on some occasions the weaker agent's level of performance is as high as that of the stronger agent, whereas on others it is lower. This, in turn, would drive the stronger agent to introduce variability into his or her behavior. We model this situation in a game, present its game-theoretic solution, and report an experiment, involving 144 individuals, in which we tested whether players are actually sensitive to their relative strengths and know how to allocate their resources given those relative strengths. Our results indicate that they do.
Hart, Sergiu, and Benjamin Weiss. Evolutionarily Stable Strategies Of Random Games, And The Vertices Of Random Polygons. Discussion Papers 2007. Web. Publisher's Version. Abstract:
An evolutionarily stable strategy (ESS) is an equilibrium strategy that is immune to invasions by rare alternative ("mutant") strategies. Unlike Nash equilibria, ESS do not always exist in finite games. In this paper, we address the question of what happens when the size of the game increases: does an ESS exist for "almost every large" game? Letting the entries in the n x n game matrix be randomly chosen according to an underlying distribution F, we study the number of ESS with support of size 2. In particular, we show that, as n goes to infinity, the probability of having such an ESS: (i) converges to 1 for distributions F with "exponential and faster decreasing tails" (e.g., uniform, normal, exponential); and (ii) converges to 1 - 1/sqrt(e) for distributions F with "slower than exponential decreasing tails" (e.g., lognormal, Pareto, Cauchy). Our results also imply that the expected number of vertices of the convex hull of n random points in the plane converges to infinity for the distributions in (i), and to 4 for the distributions in (ii).
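The convex-hull corollary is easy to probe numerically. Below is a minimal sketch using numpy and scipy; the normal and Cauchy samplers stand in for the two tail regimes, and the sample sizes and trial counts are arbitrary choices.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

def mean_hull_vertices(sample, n, trials=100):
    """Average number of convex-hull vertices of n i.i.d. planar points."""
    return np.mean([ConvexHull(sample(n)).vertices.size
                    for _ in range(trials)])

n = 20_000
light = lambda n: rng.normal(size=(n, 2))           # fast-decreasing tails
heavy = lambda n: rng.standard_cauchy(size=(n, 2))  # heavy tails
print("normal:", mean_hull_vertices(light, n))  # grows (slowly) with n
print("cauchy:", mean_hull_vertices(heavy, n))  # stays close to 4
```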
Yaakov Kareev, Klaus Fiedler, and Judith Avrahami. Expected Prediction Accuracy And The Usefulness Of Contingencies. Discussion Papers 2007. Web. Publisher's Version. Abstract:
Regularities in the environment are used to decide what course of action to take and how to prepare for future events. Here we focus on the utilization of regularities for prediction and argue that the commonly considered measure of regularity - the strength of the contingency between antecedent and outcome events - does not fully capture the goodness of a regularity for predictions. We propose, instead, a new measure - the level of expected prediction accuracy (ExpPA) - which takes into account the fact that, at times, maximal prediction accuracy can be achieved by always predicting the same, most prevalent outcome, and at other times by predicting one outcome for one antecedent and another for the other. Two experiments, testing the ExpPA measure in explaining participants' behavior, found that participants are sensitive to the twin facets of ExpPA and that prediction behavior is best explained by this new measure.
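One plausible formalization (an assumption on our part; the paper's exact definition of ExpPA may differ) is the expected accuracy of the best per-antecedent prediction policy. The sketch below contrasts it with a standard Delta-p contingency measure on two hypothetical 2x2 tables.

```python
def exp_pa(joint):
    """Expected accuracy of the best prediction policy: for each antecedent,
    predict its most likely outcome. joint[a][o] = P(antecedent a, outcome o)."""
    return sum(max(row) for row in joint)

def delta_p(joint):
    """Contingency strength: P(o=0 | a=0) - P(o=0 | a=1)."""
    return joint[0][0] / sum(joint[0]) - joint[1][0] / sum(joint[1])

weak   = [[0.45, 0.05], [0.35, 0.15]]  # always predicting o=0 is optimal
strong = [[0.40, 0.10], [0.10, 0.40]]  # the best policy tracks the antecedent
for table in (weak, strong):
    print(round(delta_p(table), 2), round(exp_pa(table), 2))
# Output: 0.2 0.8 and 0.6 0.8; different contingency strengths, identical
# expected prediction accuracy, which is the distinction the abstract draws.
```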
Hart, Sergiu. Five Questions On Game Theory. Discussion Papers 2007. Web. Publisher's Version.
Peleg, Bezalel, and Ariel D. Procaccia. Implementation By Mediated Equilibrium. Discussion Papers 2007. Web. Publisher's Version. Abstract:
Implementation theory tackles the following problem: given a social choice correspondence, find a decentralized mechanism such that for every constellation of the individuals' preferences, the set of outcomes in equilibrium is exactly the set of socially optimal alternatives (as specified by the correspondence). In this paper we are concerned with implementation by mediated equilibrium; under such an equilibrium, a mediator coordinates the players' strategies in a way that discourages deviation. Our main result is a complete characterization of social choice correspondences which are implementable by mediated strong equilibrium. This characterization, in addition to being strikingly concise, implies that some important social choice correspondences which are not implementable by strong equilibrium are in fact implementable by mediated strong equilibrium.
Guttel, Ehud, and Barak Medina. Less Crime, More (Vulnerable) Victims: Game Theory And The Distributional Effects Of Criminal Sanctions. Discussion Papers 2007. Web. Publisher's Version. Abstract:
Harsh sanctions are conventionally assumed to primarily benefit vulnerable targets. Contrary to this perception, this article shows that augmented sanctions often serve the less vulnerable targets. While decreasing crime, harsher sanctions also induce the police to shift enforcement efforts from more to less vulnerable victims. When this shift is substantial, augmented sanctions exacerbate, rather than reduce, the risk to vulnerable victims. Based on this insight, this article suggests several normative implications concerning the efficacy of enhanced sanctions, the importance of victims' funds, and the connection between police operations and apprehension rates.
Peleg, Bezalel, and Ariel D. Procaccia. Mediators Enable Truthful Voting. Discussion Papers 2007. Web. Publisher's Version. Abstract:
The Gibbard-Satterthwaite Theorem asserts the impossibility of designing a non-dictatorial voting rule in which truth-telling always constitutes a Nash equilibrium. We show that in voting games of complete information where a mediator is on hand, this troubling impossibility result can be alleviated. Indeed, we characterize families of voting rules where, given a mediator, truthful preference revelation is always in strong equilibrium. In particular, we observe that the family of feasible elimination procedures has the foregoing property.
Morvai, Gusztav, and Benjamin Weiss. On Sequential Estimation And Prediction For Discrete Time Series. Discussion Papers 2007. Web. Publisher's Version. Abstract:
The problem of extracting as much information as possible from a sequence of observations of a stationary stochastic process X0, X1, ..., Xn has been considered by many authors from different points of view. It has long been known through the work of D. Bailey that no universal estimator for P(Xn+1 | X0, X1, ..., Xn) can be found which converges to the true conditional probability almost surely. Despite this result, for restricted classes of processes, or for sequences of estimators along stopping times, universal estimators can be found. We present here a survey of some of the recent work that has been done along these lines.
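As a concrete (assumed) example of the kind of estimator at issue, the following Python sketch computes the naive plug-in estimate of P(Xn+1 | last k symbols) from empirical frequencies. Bailey's result says no such universal scheme converges almost surely for every stationary process, though it works for the Markov example below.

```python
import random
from collections import defaultdict

def empirical_conditional(xs, k=1):
    """Plug-in estimate of P(next symbol = 1 | last k symbols) for a
    binary sequence, from empirical frequencies of length-k contexts."""
    counts = defaultdict(lambda: [0, 0])
    for i in range(k, len(xs)):
        context = tuple(xs[i - k:i])
        counts[context][xs[i]] += 1
    return {c: n1 / (n0 + n1) for c, (n0, n1) in counts.items()}

# Binary Markov chain with P(1|0) = 0.3 and P(1|1) = 0.8.
random.seed(0)
xs, p = [0], {0: 0.3, 1: 0.8}
for _ in range(100_000):
    xs.append(1 if random.random() < p[xs[-1]] else 0)
print(empirical_conditional(xs, k=1))  # close to {(0,): 0.3, (1,): 0.8}
```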
Mandel, Micha, and Yosef Rinott. On Statistical Inference Under Selection Bias. Discussion Papers 2007. Web. Publisher's Version. Abstract:
This note revisits the problem of selection bias, using a simple binomial example. It focuses on selection that is introduced by observing the data and making decisions prior to formal statistical analysis. Decision rules and the interpretation of confidence measures and results must then be taken relative to the point of view of the decision maker, i.e., before selection or after it. Such a distinction is important since inference can be considerably altered when the decision maker's point of view changes. This note demonstrates the issue, using both the frequentist and the Bayesian paradigms.
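A minimal simulation of the flavor of selection bias discussed here (the multi-arm setup and all numbers are illustrative assumptions, not the note's own example):

```python
import random

# Run several binomial experiments with the same true success rate and
# report only the best-looking one; the naive post-selection estimate is
# biased upward relative to the true p.
random.seed(0)
p_true, n, arms, trials = 0.5, 20, 5, 10_000
selected = []
for _ in range(trials):
    rates = [sum(random.random() < p_true for _ in range(n)) / n
             for _ in range(arms)]
    selected.append(max(rates))  # the estimate reported after selection
print(sum(selected) / trials)    # noticeably above 0.5
```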
Shapira, Zur, and Itzhak Venezia. On The Preference For Full-Coverage Policies: Why Do People Buy Too Much Insurance?. Discussion Papers 2007. Web. Publisher's Version. Abstract:
One of the most intriguing questions in insurance is the preference of consumers for low or zero deductible insurance policies. This stands in sharp contrast to a theorem proved by Mossin (1968) that, under quite common assumptions, when the price of insurance is higher than its actuarial value, full coverage is not optimal. We show in a series of experiments that amateur subjects tend to underestimate the value of a policy with a deductible and that the degree of underestimation increases with the size of the deductible. We hypothesize that this tendency is caused by the anchoring heuristic. In particular, in pricing a policy with a deductible, subjects first consider the price of a full coverage policy. Then they anchor on the size of the deductible and subtract it from the price of the full coverage policy. However, they do not adjust the price upward enough to take into account the fact that there is only a small chance that the deductible will be applied toward their payments. We also show that professionals in the field of insurance are less prone to such a bias. This implies that a policy with a deductible priced according to the true expected payments may seem "overpriced" to the insured and therefore may not be purchased. Since the values of full coverage policies are not underestimated, the insured may find them to be relatively "better deals".
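A small worked example of the pricing logic (all numbers are assumed for illustration, not taken from the paper):

```python
# A loss of 10,000 occurs with probability 0.05; suppose full coverage
# is priced at 600.
p_loss, full_price, deductible = 0.05, 600.0, 100.0

# Actuarially, the deductible is borne only when a loss occurs, so the
# fair discount is p_loss * deductible, not the whole deductible.
fair_price = full_price - p_loss * deductible  # 595.0

# The anchoring heuristic described above: start from the full-coverage
# price and subtract the entire deductible, adjusting upward too little.
anchored_estimate = full_price - deductible    # 500.0

print(fair_price, anchored_estimate)
# A fairly priced deductible policy (595) looks "overpriced" next to the
# anchored valuation (500).
```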
Maya Bar-Hillel, David V. Budescu, and Moty Amar. Predicting World Cup Results: Do Goals Seem More Likely When They Pay Off?. Discussion Papers 2007. Web. Publisher's Version. Abstract:
In a series of experiments, Bar-Hillel and Budescu (1995) failed to find a desirability bias in probability estimation. The World Cup soccer tournament (of 2002 and 2006) provided an opportunity to revisit the phenomenon, in a context where wishful thinking and desirability bias are notoriously rampant (e.g., Babad, 1991). Participants estimated the probabilities of various teams to win their upcoming games. They were promised money if one particular team, randomly designated by the experimenter, would win its upcoming game. Participants judged their target team more likely to win than did other participants whose promised monetary reward was contingent on the victory of the rival team. Prima facie this seems to be a desirability bias. However, in a follow-up study we made one team salient, without promising monetary rewards, by simply stating that it is "of special interest". Again participants judged their target team more likely to win than did other participants whose "team of special interest" was the rival team. Moreover, the magnitude of the two effects was very similar. On grounds of parsimony, we conclude that what seemed like a desirability bias may just be a salience/marking effect, and – though optimism is a robust and ubiquitous human phenomenon – wishful thinking still remains elusive. In 2008, a shorter version of this paper was published under the title "Wishful thinking in predicting world cup results" as chapter 2 of Rationality and Social Responsibility (J. Krueger, ed.), 175-186. The link to dp448 follows the version published in Psychonomic Bulletin and Review.
Metge, Jens. Protecting The Domestic Market: Industrial Policy And Strategic Firm Behaviour. Discussion Papers 2007. Web. Publisher's Version. Abstract:
Foreign firms trying to break into a new market commonly undercut domestic prices and, hence, subsidise the consumers' costs of switching in order to gain a positive market share. However, this may constitute dumping as defined in Article VI of the General Agreement on Tariffs and Trade (GATT). Consequently, domestic firms trying to protect themselves against potential competitors often demand an anti-dumping (AD) investigation. In a two-period model of market entry with horizontally differentiated products and exogenous switching costs, it is demonstrated that the mere existence of switching costs and AD rules may result in an anti-competitive effect: the administratively set minimum-price rule protects the domestic firm and yields higher prices. Therefore, some consumers will not buy either product in either period, although they would have done so in the absence of AD. Consequently, competition policy should reassess AD regulation.
Lehmann, Daniel. Quantic Superpositions And The Geometry Of Complex Hilbert Spaces. Discussion Papers 2007. Web. Publisher's Version. Abstract:
The concept of a superposition is a revolutionary novelty introduced by Quantum Mechanics. If a system may be in any one of two pure states x and y, we must consider that it may also be in any one of many superpositions of x and y. This paper proposes an in-depth analysis of superpositions. It claims that superpositions must be considered when one cannot distinguish between possible paths, i.e., histories, leading to the current state of the system. In such a case the resulting state is some compound of the states that result from each of the possible paths. It claims that states can be compounded, i.e., superposed in such a way only if they are not orthogonal. Since different classical states are orthogonal, the claim implies no non-trivial superpositions can be observed in classical systems. It studies the parameters that define such compounds and finds two: a proportion defining the mix of the different states entering the compound and a phase difference describing the interference between the different paths. Both quantities are geometrical in nature: relating one-dimensional subspaces in complex Hilbert spaces. It proposes a formal definition of superpositions in geometrical terms. It studies the properties of superpositions.
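A generic numerical sketch of this two-parameter picture, using standard quantum-mechanical conventions in numpy (an illustration of mixing proportion and relative phase, not Lehmann's formal definition):

```python
import numpy as np

# Two non-orthogonal pure states as unit vectors in C^2.
x = np.array([1, 0], dtype=complex)
y = np.array([1, 1], dtype=complex) / np.sqrt(2)

def superpose(x, y, proportion, phase):
    """Normalized compound of x and y, parameterized by a mixing
    proportion and a relative phase (hypothetical parameterization
    matching the two parameters named in the abstract)."""
    z = (np.sqrt(proportion) * x
         + np.sqrt(1 - proportion) * np.exp(1j * phase) * y)
    return z / np.linalg.norm(z)

z = superpose(x, y, proportion=0.5, phase=np.pi / 3)
print(np.abs(np.vdot(x, z)) ** 2,  # overlap with x
      np.abs(np.vdot(y, z)) ** 2)  # overlap with y
```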
Ben-Porath, Elchanan, and Aviad Heifetz. Rationalizable Expectations. Discussion Papers 2007. Web. Publisher's Version. Abstract:
Consider an exchange economy with asymmetric information. What is the set of outcomes that are consistent with common knowledge of rationality and market clearing? We propose the concept of common knowledge of rationality and market clearing (CKRMC) as an answer to this question. The set of price functions that are CKRMC is the maximal set F with the property that every f ∈ F defines prices that clear the markets for demands that can be rationalized by some profile of subjective beliefs on F. Thus, the difference between CKRMC and Rational Expectations Equilibrium (REE) is that CKRMC allows for a situation where the agents do not know the true price function and, furthermore, may have different beliefs about it. We characterize CKRMC, study its properties, and apply it to a general class of economies with two commodities. CKRMC manifests intuitive properties that stand in contrast to the full revelation property of REE. In particular, we obtain that for a broad class of economies: (1) there is a whole range of prices that are CKRMC in every state; (2) the set of CKRMC outcomes is monotonic with the amount of information in the economy.
Tamar Keasar, Adi Sadeh, and Avi Shmida. The Signaling Function Of An Extra-Floral Display: What Selects For Signal Development?. Discussion Papers 2007. Web. Publisher's Version. Abstract:
The vertical inflorescences of the Mediterranean annual Salvia viridis carry many small, colorful flowers, and are frequently terminated by a conspicuous tuft of colorful leaves ("flags") that attracts insect pollinators. Insects may use the flags as indicators of the food reward in the inflorescences, as long-distance cues for locating and choosing flowering patches, or both. Clipping of flags from patches of inflorescences in the field significantly reduced the number of pollinators that arrived at the patches, but not the total number of inflorescences and flowers visited by them. The number of flowers visited per inflorescence significantly increased with inflorescence size, however. Inflorescence and flower visit rates significantly increased with patch size when flags were present, but not after flag removal. Six percent of the plants in the study population did not develop any flag during blooming, yet suffered no reduction in seed set as compared to flag-bearing neighboring individuals. These results suggest that flags signal long-distance information to pollinators (perhaps indicating patch location or size), while flower-related cues may indicate inflorescence quality. Plants that do not develop flags probably benefit from the flag signals displayed by their neighbors, without bearing the costs of flag production. Thus, flag-producing plants can be viewed as altruists that enhance their neighbors' fitness. Greenhouse-grown S. viridis plants allocated about 0.5% of their biomass to flag production, and plants grown under water stress did not reduce their biomass allocation to flags as compared to irrigated controls. These findings suggest that the expenses of flag production are modest, perhaps reducing the cost of altruism. We discuss additional potential evolutionary mechanisms that may select for the maintenance of flag production.