Publications | The Federmann Center for the Study of Rationality


2001
David Assaf, Larry Goldstein, and Ester Samuel-Cahn. Ratio Prophet Inequalities When the Mortal Has Several Choices. Discussion Papers 2001. Web. Publisher's Version. Abstract:
Let X_1, X_2, ... be non-negative, independent random variables with finite expectation, and let X*_n = max(X_1, ..., X_n). The value E[X*_n] is what can be obtained by a "prophet". A "mortal", on the other hand, may use k >= 1 stopping rules t_1, ..., t_k, yielding a return of E[max_{i=1,...,k} X_{t_i}]. For n >= k the optimal return is V^n_k(X_1, ..., X_n) = sup E[max_{i=1,...,k} X_{t_i}], where the supremum is over all stopping rules t_1, ..., t_k such that P(t_i
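A quick way to build intuition for the prophet-mortal comparison is simulation. The sketch below is my own illustration, not the paper's method: it estimates the prophet's E[X*_n] on i.i.d. exponential draws and compares it with the return of a mortal who uses a naive fixed-threshold rule for each of k = 2 choices. Since the threshold rule is only a heuristic, its return is a lower bound on the optimal V^n_k.

```python
# Illustrative Monte Carlo sketch (not from the paper): compare the prophet's
# expected maximum E[X*_n] with the return of a mortal who uses k threshold
# stopping rules on i.i.d. exponential draws. The threshold rule is a simple
# heuristic, not the optimal V^n_k.
import random

def simulate(n=20, k=2, trials=50_000, threshold=2.0):
    prophet_total = 0.0
    mortal_total = 0.0
    for _ in range(trials):
        x = [random.expovariate(1.0) for _ in range(n)]
        prophet_total += max(x)                      # prophet sees everything
        picks = []
        for value in x:                              # mortal scans once, left to right
            if len(picks) < k and value >= threshold:
                picks.append(value)                  # stop one rule at this observation
        while len(picks) < k:                        # unused rules are forced to stop at the end
            picks.append(x[-1])
        mortal_total += max(picks)                   # mortal keeps the best of the k stops
    return prophet_total / trials, mortal_total / trials

prophet, mortal = simulate()
print(f"prophet E[X*_n] ~ {prophet:.3f},  mortal (k=2, heuristic) ~ {mortal:.3f}")
```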
Robert J. Aumann. The Rationale for Measurability. Discussion Papers 2001. Web. Publisher's Version. Abstract:
When modelling large economies by nonatomic measure spaces of agents, one defines "coalitions" as measurable - not arbitrary - sets of agents. Here we suggest a rationale for this restriction: "Real" economies have finitely many agents. In them, coalitions are associated with various measures, like total endowment, which play a vital role in the analysis. So in the model, too, one should be able to associate similar measures with coalitions; this means that they must be "measurable." Thus, though in the finite case a coalition is simply an arbitrary set of players, the appropriate generalization to the infinite case is not an arbitrary but a measurable set.
Gil Kalai, Ariel Rubinstein, and Ran Spiegler. Rationalizing Choice Functions by Multiple Rationales. Discussion Papers 2001. Web. Publisher's Version. Abstract:
The paper presents a notion of rationalizing choice functions that violate the Independence of Irrelevant Alternatives axiom. A collection of linear orderings is said to provide a rationalization by multiple rationales for a choice function if the choice from any choice set can be rationalized by one of the orderings. We characterize a tight upper bound on the minimal number of orderings that is required to rationalize arbitrary choice functions, and calculate the minimal number for several specific choice procedures.
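To make the definition concrete, here is a small checker of my own devising (the function name and the example choice function are made up, not taken from the paper): given a choice function and a collection of linear orderings, it tests whether every choice can be rationalized by at least one of the orderings.

```python
# Hypothetical helper: does the given collection of linear orderings provide a
# rationalization by multiple rationales for the choice function, i.e. is the
# chosen element of every choice set maximal under at least one ordering?

def rationalized_by(choice, orderings):
    """choice: dict mapping frozenset of alternatives -> chosen alternative.
    orderings: list of lists, each a linear order from best to worst."""
    for menu, chosen in choice.items():
        if not any(min(menu, key=order.index) == chosen for order in orderings):
            return False
    return True

# Example choice function on {a, b, c} that violates IIA:
choice = {
    frozenset("abc"): "a",
    frozenset("ab"): "b",     # a is chosen from {a,b,c} but not from {a,b}
    frozenset("ac"): "a",
    frozenset("bc"): "b",
}
orders = [list("abc"), list("bac")]
print(rationalized_by(choice, orders))   # True: two rationales suffice here
```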
Abraham Neyman. Real Algebraic Tools in Stochastic Games. Discussion Papers 2001. Web. Publisher's Version. Abstract:
The present chapter brings together parts of the theory of polynomial equalities and inequalities used in the theory of stochastic games. The theory can be considered as a theory of polynomial equalities and inequalities over the field of real numbers or the field of real algebraic numbers or more generally over an arbitrary real closed field.
Gooni Orshan and Peter Sudholter. Reconfirming the Prenucleolus. Discussion Papers 2001. Web. Publisher's Version. Abstract:
By means of an example it is shown that the prenucleolus is not the only minimal solution that satisfies nonemptiness, Pareto optimality, covariance, the equal treatment property and the reduced game property, even if the universe of players is infinite. This example also disproves a conjecture of Gurvich et al. Moreover, we prove that the prenucleolus is axiomatized by nonemptiness, covariance, the equal treatment property, and the reconfirmation property, provided the universe of players is infinite.
Eyal Winter. Scapegoats and Optimal Allocation of Responsibility. Discussion Papers 2001. Web. Publisher's Version. Abstract:
We consider a model of hierarchical organizations in which agents have the option of reducing the probability of failure by investing towards their decisions. A mechanism specifies a distribution of sanctions in case of failure across the levels of the hierarchy. It is said to be investment-inducing if it induces all agents to invest in equilibrium. It is said to be optimal if it does so at minimal total punishment. We characterize optimal investment-inducing mechanisms in several versions of our benchmark model. In particular we refer to the problem of allocating individuals with diverse qualifications to different levels of the hierarchy as well as allocating tasks of different importance across different hierarchy levels. We also address the issue of incentive-optimal hierarchy architectures.
Maya Bar-Hillel and Yigal Attali. Seek Whence: Answer Sequences and Their Consequences in Key-Balanced Multiple-Choice Tests. Discussion Papers 2001. Web. Publisher's Version. Abstract:
The professional producers of such widespread high-stakes tests as the SAT have a policy of balancing, rather than randomizing, the answer keys of their tests. Randomization yields answer keys that are, on average, balanced, whereas a policy of deliberate balancing assures this desirable feature not just on average, but in every test. This policy is a well-kept trade secret, and apparently has been successfully kept as such, since there is no evidence of any awareness on the part of test takers and the coaches that serve them that this is an exploitable feature of answer keys. However, balancing leaves an identifiable signature on answer keys, thus not only jeopardizing the secret, but also creating the opportunity for its exploitation. The present paper presents the evidence for key balancing, the traces this practice leaves in answer keys, and the ways in which testwise test takers can exploit them. We estimate that such test takers can add between 10 and 16 points to their final SAT score, on average, depending on their knowledge level. The secret now being out of the closet, the time has come for test makers to do the right thing, namely to randomize, not balance, their answer keys. Following the link to the published version of dp252, an earlier, but fuller, version is included.
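A rough, purely illustrative sense of the exploit can be had from a toy simulation of my own (it does not reproduce the paper's SAT data or scoring): under a balanced key, a guesser who always picks the option least used so far on his own answer sheet outscores a uniform random guesser; under a randomized key the trick confers no advantage.

```python
# Toy illustration of exploiting key balancing: a test taker knows some answers
# for sure and must guess the rest. Compare "least-used option" guessing with
# uniform random guessing on balanced vs. randomized keys.
import random
from collections import Counter

OPTIONS = "ABCDE"

def make_key(n, balanced):
    if balanced:
        key = list(OPTIONS) * (n // len(OPTIONS))
        random.shuffle(key)
        return key
    return [random.choice(OPTIONS) for _ in range(n)]

def mean_score(n=25, known=15, balanced=True, testwise=True, trials=20_000):
    total = 0
    for _ in range(trials):
        key = make_key(n, balanced)
        known_idx = set(random.sample(range(n), known))
        used = Counter(key[i] for i in known_idx)        # answers the taker is sure of
        correct = known
        for i in range(n):
            if i in known_idx:
                continue
            guess = (min(OPTIONS, key=lambda o: used[o]) if testwise
                     else random.choice(OPTIONS))        # least-used option so far vs. random
            correct += (guess == key[i])
            used[guess] += 1
        total += correct
    return total / trials

print("balanced key, testwise :", mean_score(testwise=True))
print("balanced key, random   :", mean_score(testwise=False))
```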
Pradeep Dubey and John Geanakoplos. Signalling and Default: Rothschild-Stiglitz Reconsidered. Discussion Papers 2001. Web. Publisher's Version. Abstract:
In our previous paper we built a general equilibrium model of default and punishment in which equilibrium always exists and endogenously determines asset promises, penalties, and sales constraints. In this paper we interpret the endogenous sales constraints as equilibrium signals. By specializing the default penalties and imposing an exclusivity constraint on asset sales, we obtain a perfectly competitive version of the Rothschild-Stiglitz model of insurance. In our model their separating equilibrium always exists even when they say it doesn't.
Abraham Neyman. Singular Games in bv'NA. Discussion Papers 2001. Web. Publisher's Version. Abstract:
Every simple monotonic game in bv'NA is a weighted majority game. Every game v in bv'NA has a representation v = u + sum_{i=1}^infinity f_i o mu_i, where u in pNA, mu_i in NA^1, and (f_i) is a sequence of bv' functions with sum_{i=1}^infinity ||f_i||
Gil Kalai. Social Choice and Threshold Phenomena. Discussion Papers 2001. Web. Publisher's Version. Abstract:
Arrow's theorem asserts that under certain conditions every non-dictatorial social choice function leads to nonrational social choice for some profiles. In other words, in the non-dictatorial case, if we observe that the society prefers alternative A over B and alternative B over C, we cannot deduce what its choice will be between A and C. Here we ask whether we can deduce anything about the society's choice in other cases from observing a sample of the society's choices. We prove that the answer is "no" for large societies, for neutral and monotonic social choice functions such that the society's choice is not typically determined by the choices of a few individuals. The proof is based on threshold properties of Boolean functions and on analysis of the social choice under some probabilistic assumptions on the profiles. A similar argument shows that, under the same conditions on the social choice function but under certain other probabilistic assumptions on the profiles, the social choice function will typically lead to rational choice for the society.
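As a drastically simplified illustration of rational social choice under probabilistic assumptions on profiles, the toy simulation below (my own setup, not the class of social choice functions studied in the paper) draws i.i.d. uniform strict preference profiles over three alternatives and checks how often pairwise majority rule happens to be transitive.

```python
# Toy simulation: frequency of a transitive (cycle-free) pairwise majority
# relation over three alternatives under random preference profiles.
import random
from itertools import permutations

ALTS = "ABC"
ORDERS = list(permutations(ALTS))

def majority_prefers(profile, x, y):
    return sum(1 for o in profile if o.index(x) < o.index(y)) > len(profile) / 2

def society_is_rational(profile):
    beats = {(x, y) for x in ALTS for y in ALTS
             if x != y and majority_prefers(profile, x, y)}
    # with three alternatives, irrationality means a 3-cycle in the majority relation
    return not any((x, y) in beats and (y, z) in beats and (z, x) in beats
                   for x in ALTS for y in ALTS for z in ALTS)

n_voters, trials = 11, 20_000
rational = sum(society_is_rational([random.choice(ORDERS) for _ in range(n_voters)])
               for _ in range(trials))
print(f"transitive majority relation in {rational / trials:.1%} of random profiles")
```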
Igal Milchtaich and Eyal Winter. Stability and Segregation in Group Formation. Discussion Papers 2001. Web. Publisher's Version. Abstract:
This paper presents a model of group formation based on the assumption that individuals prefer to associate with people similar to them. It is shown that, in general, if the number of groups that can be formed is bounded, then a stable partition of the society into groups may not exist. A partition is defined as stable if none of the individuals would prefer to be in a different group than the one he is in. However, if individuals' characteristics are one-dimensional, then a stable partition always exists. We give sufficient conditions for stable partitions to be segregating (in the sense that, for example, low-characteristic individuals are in one group and high-characteristic ones are in another) and Pareto efficient. In addition, we propose a dynamic model of individual myopic behavior describing the evolution of group formation to an eventual stable, segregating, and Pareto efficient partition.
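The flavor of the myopic dynamic can be conveyed by a miniature sketch. In the version below, which is my own simplification rather than the paper's specification, each individual repeatedly moves to the group whose mean characteristic (counting himself) is closest to his own, until no one wants to move.

```python
# Minimal sketch of a myopic adjustment dynamic with one-dimensional
# characteristics; the distance-to-group-mean payoff is an assumption made
# for illustration.
import random

def myopic_dynamics(chars, n_groups=2, max_rounds=10_000):
    assignment = [random.randrange(n_groups) for _ in chars]

    def cost(i, g):
        # i evaluates group g by the distance of his characteristic to the
        # group's mean, counting himself as a member of g
        members = [chars[j] for j in range(len(chars)) if assignment[j] == g and j != i]
        members.append(chars[i])
        return abs(chars[i] - sum(members) / len(members))

    for _ in range(max_rounds):
        moved = False
        for i in range(len(chars)):
            best = min(range(n_groups), key=lambda g: cost(i, g))
            if best != assignment[i]:
                assignment[i] = best
                moved = True
        if not moved:                      # no one wants to deviate: the partition is stable
            break
    return assignment

chars = sorted(random.uniform(0, 1) for _ in range(20))
print(list(zip([round(c, 2) for c in chars], myopic_dynamics(chars))))
# typically ends up segregated: low characteristics in one group, high in the other
```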
Hans Keiding and Bezalel Peleg. Stable Voting Procedures for Committees in Economic Environments. Discussion Papers 2001. Web. Publisher's Version. Abstract:
A strong representation of a committee, formalized as a simple game, on a convex and closed set of alternatives is a game form with the members of the committee as players such that (i) the winning coalitions of the simple game are exactly those coalitions which can get any given alternative independently of the strategies of the complement, and (ii) for any profile of continuous and convex preferences, the resulting game has a strong Nash equilibrium. In the paper, it is investigated whether committees have representations on convex and compact subsets of R^m. This is shown to be the case if there are vetoers; for committees with no vetoers the existence of strong representations depends on the structure of the alternative set as well as on that of the committee (its Nakamura number). Thus, if A is strictly convex, compact and has a smooth boundary, then no committee can have a strong representation on A. On the other hand, if A has a non-smooth boundary, representations may exist depending on the Nakamura number (if it is at least 7).
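Since the existence results turn on the committee's Nakamura number, the following small utility (my own sketch, not taken from the paper) computes it by brute force for a simple game given by its list of winning coalitions: the Nakamura number is the size of the smallest family of winning coalitions with empty intersection, taken to be infinite when there are vetoers.

```python
# Brute-force Nakamura number of a simple game described by its winning coalitions.
from itertools import combinations

def nakamura_number(winning):
    """winning: list of frozensets of players (the winning coalitions)."""
    vetoers = frozenset.intersection(*winning)
    if vetoers:                                  # a vetoer sits in every winning coalition
        return float("inf")
    for k in range(2, len(winning) + 1):
        for family in combinations(winning, k):
            if not frozenset.intersection(*family):
                return k                         # smallest family with empty intersection
    return float("inf")

# Simple majority among 3 players: winning coalitions are those of size >= 2.
players = {1, 2, 3}
winning = [frozenset(c) for r in (2, 3) for c in combinations(players, r)]
print(nakamura_number(winning))   # 3 for three-player majority rule
```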
Suresh Mutuswami and Eyal Winter. Subscription Mechanisms for Network Formation. Discussion Papers 2001. Web. Publisher's Version. Abstract:
We analyze a model of network formation where the costs of link formation are publicly known but individual benefits are not known to the social planner. The objective is to design a simple mechanism ensuring efficiency, budget balance and equity. We propose two mechanisms towards this end; the first ensures efficiency and budget balance but not equity. The second mechanism corrects the asymmetry in payoffs through a two-stage variant of the first mechanism. We also discuss an extension of the basic model to cover the case of directed graphs and give conditions under which the proposed mechanisms are immune to coalitional deviations.
Uriel Procaccia and Uzi Segal. Super Majoritarianism and the Endowment Effect. Discussion Papers 2001. Web. Publisher's Version. Abstract:
The American and some other constitutions entrench property rights by requiring super majoritarian voting as a condition for amending or revoking their own provisions. Following Buchanan and Tullock [5], this paper analyzes individuals' interests behind a veil of ignorance, and shows that under some standard assumptions, a (simple) majoritarian rule should be adopted. This result changes if one assumes that preferences are consistent with the behavioral phenomenon known as the "endowment effect." It then follows that (at least some) property rights are best defended by super majoritarian protection. The paper then shows that its theoretical results are consistent with a number of doctrines underlying American Constitutional Law.
Nir Dagan, Oscar Volij, and Eyal Winter. The Time-Preference Nash Solution. Discussion Papers 2001. Web. Publisher's Version. Abstract:
The primitives of a bargaining problem consist of a set, S, of feasible utility pairs and a disagreement point in it. The idea is that the set S is induced by an underlying set of physical outcomes which, for the purposes of the analysis, can be abstracted away. In a very influential paper Nash (1950) gives an axiomatic characterization of what is now the widely known Nash bargaining solution. Rubinstein, Safra, and Thomson (1992) (RST in the sequel) recast the bargaining problem into the underlying set of physical alternatives and give an axiomatization of what is known as the ordinal Nash bargaining solution. This solution has a very natural interpretation and has the interesting property that when risk preferences satisfy the expected utility axioms, it induces the standard Nash bargaining solution of the induced bargaining problem. This property justifies the proper name in the solution's appellation. The purpose of this paper is to give an axiomatic characterization of the rule that assigns the time-preference Nash outcome to each bargaining problem.
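For reference, the standard (cardinal) Nash bargaining solution that the ordinal and time-preference solutions are compared against maximizes the product of utility gains over the disagreement point. The numerical sketch below uses a made-up feasible set with frontier u2 = 1 - u1^2 and d = (0, 0); it illustrates Nash's original solution, not the time-preference solution characterized in the paper.

```python
# Numerical sketch of the standard Nash bargaining solution: maximize the
# product of utility gains over the disagreement point d along the Pareto
# frontier of the feasible set S.
def nash_solution(frontier, d=(0.0, 0.0), grid=100_000):
    best, best_point = -1.0, None
    for i in range(grid + 1):
        u1 = i / grid
        u2 = frontier(u1)
        product = (u1 - d[0]) * (u2 - d[1])      # the Nash product
        if product > best:
            best, best_point = product, (u1, u2)
    return best_point

u1, u2 = nash_solution(lambda x: 1 - x * x)      # made-up frontier u2 = 1 - u1^2
print(f"Nash bargaining outcome ~ ({u1:.3f}, {u2:.3f})")   # ~ (0.577, 0.667)
```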
Edna Ullmann-Margalit. Trust, Distrust, and In Between. Discussion Papers 2001. Web. Publisher's Version. Abstract:
The springboard for this paper is the nature of the negation relation between the notions of trust and distrust. In order to explore this relation, an analysis of full trust is offered. An investigation follows of the ways in which this "end-concept" of full trust can be negated. In particular, the sense in which distrust is the negation of trust is focused on. An asymmetry is pointed to, between 'not-to-trust' and 'not-to-distrust'. This asymmetry helps explain the existence of a gap between trust and distrust: the possibility of being suspended between the two. Since both trust and distrust require reasons, the question that relates to this gap is what if there are no reasons, or at any rate no sufficient reasons, either way. This kind of situation, of being suspended between two poles without a sufficient reason to opt for any one of them, paradigmatically calls for a presumption. In the case in hand this means a call for either a rebuttable presumption in favor of trust or a rebuttable presumption in favor of distrust. In some of the literature on trust it seems to be taken almost for granted that generalized distrust is justifiable in a way that generalized trust is not. This would seem to suggest a straightforward recommendation for the presumption of distrust over the presumption of trust. Doubts are raised whether indeed it is justified to adopt this as a default presumption. The notion of soft distrust, which is introduced at this point as contrasted with hard distrust, contributes in a significant way to these doubts. The analysis offered throughout the paper is of individual and personal trust and distrust. As it stands, it would seem not to be directly applicable to the case of trusting or distrusting institutions (like the court or the police). The question is therefore raised, in the final section, whether and how the analysis of individual trust and distrust can be extended to institutional trust and distrust. A case is made that there is asymmetry here too: while it is a misnomer to talk of trusting institutions, talk of distrusting institutions is not.
Pradeep Dubey and Ori Haimanko. Unilateral Deviations with Perfect Information. Discussion Papers 2001. Web. Publisher's Version. Abstract:
For extensive form games with perfect information, consider a learning process in which, at any iteration, each player unilaterally deviates to a best response to his current conjectures of others' strategies, and then updates his conjectures in accordance with the induced play of the game. We show that, for generic payoffs, the outcome of the game becomes stationary in finite time, and is consistent with Nash equilibrium. In general, if payoffs have ties or if players observe more of each other's strategies than is revealed by plays of the game, the same result holds provided a rationality constraint is imposed on unilateral deviations: no player changes his moves in subgames that he deems unreachable, unless he stands to improve his payoff there. Moreover, with this constraint, the sequence of strategies and conjectures also becomes stationary and yields a self-confirming equilibrium.
Shlomit Hon-Snir. Utility Equivalence in Auctions. Discussion Papers 2001. Web. Publisher's Version. Abstract:
Auctions are considered in a (non-symmetric) independent-private-value model of valuations. It is demonstrated that a utility equivalence principle holds for an agent if and only if the agent has a constant absolute risk attitude.
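For background on the last condition (this is textbook material, not the paper's result): constant absolute risk attitude means the Arrow-Pratt coefficient is constant, which pins down the utility function up to a positive affine transformation.

```latex
A(x) \;=\; -\frac{u''(x)}{u'(x)} \;\equiv\; \alpha,
\qquad
u(x) \;=\;
\begin{cases}
-\,e^{-\alpha x}, & \alpha > 0 \ \text{(risk averse)},\\[2pt]
x, & \alpha = 0 \ \text{(risk neutral)},
\end{cases}
\quad \text{up to a positive affine transformation.}
```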
Abraham Neyman. Values of Games with Infinitely Many Players. Discussion Papers 2001. Web. Publisher's Version. Abstract:
The Shapley value is one of the basic solution concepts of cooperative game theory. It can be viewed as a sort of average or expected outcome, or as an a priori evaluation of the players' expected payoffs. The value has a very wide range of applications, particularly in economics and political science (see chapters 32, 33 and 34 in this Handbook). In many of these applications it is necessary to consider games that involve a large number of players. Often most of the players are individually insignificant, and are effective in the game only via coalitions. At the same time there may exist big players who retain the power to wield single-handed influence. A typical example is provided by voting among stockholders of a corporation, with a few major stockholders and an "ocean" of minor stockholders. In economics, one considers an oligopolistic sector of firms embedded in a large population of "perfectly competitive" consumers. In all of these cases, it is fruitful to model the game as one with a continuum of players. In general, the continuum consists of a non-atomic part (the "ocean"), along with (at most countably many) atoms. The continuum provides a convenient framework for mathematical analysis, and approximates the results for large finite games well. Also, it enables a unified view of games with finite, countable or oceanic player-sets, or indeed any mixture of these.
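For a finite game the Shapley value can be computed directly as the average marginal contribution over all orderings of the players. The sketch below is my own illustration, not taken from the chapter: a five-player weighted majority game with one "big" player (weight 3) and four minor players (weight 1 each) under a quota of 5, loosely mimicking the "big players plus ocean" picture.

```python
# Exact Shapley value as the average marginal contribution over all player orderings,
# for a small weighted majority game.
from itertools import permutations
from math import factorial

def shapley_value(players, v):
    value = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = v(coalition)
            coalition.add(p)
            value[p] += v(coalition) - before    # p's marginal contribution in this order
    n_orders = factorial(len(players))
    return {p: value[p] / n_orders for p in players}

# A coalition wins (worth 1) iff its total weight reaches the quota of 5.
weights = {"big": 3, "m1": 1, "m2": 1, "m3": 1, "m4": 1}
win = lambda S: 1.0 if sum(weights[p] for p in S) >= 5 else 0.0

print(shapley_value(list(weights), win))   # big player gets 0.6, each minor player 0.1
```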
Sergiu Hart. Values of Perfectly Competitive Economies. Discussion Papers 2001. Web. Publisher's Version. Abstract:
This chapter is devoted to the study of economic models with many agents, each of whom is relatively insignificant. These are referred to as perfectly competitive models. The basic economic concept for such models is the competitive (or Walrasian) equilibrium, which prescribes prices that make the total demand equal to the total supply, i.e., under which the "markets clear." The fact that each agent is negligible implies that he cannot singly affect the prices, and so he takes them as given when finding his optimal consumption - "demand." The chapter is organized as follows: Section 2 presents the basic model of an exchange economy with a continuum of agents, together with the definitions of the appropriate concepts. The Value Principle results are stated in Section 3. An informal (and hopefully instructive) proof of the Value Equivalence Theorem is provided in Section 4. Section 5 is devoted to additional material, generalizations, extensions and alternative approaches.