Lecture 10: The Revelation Principle; Screening
ECON4411/8011 – Microeconomic Theory
Ronald Stauber
Research School of Economics
Australian National University

References
- MWG (23.B)
- Börgers (2)
- Mailath (10)

The mechanism design problem
How should a collective decision be made, if
1. the chosen outcome affects a number of agents, and
2. the agents have private information regarding their preferences over alternative outcome choices?
Questions:
- Can the agents' individual preferences be aggregated into a social ranking of alternatives (e.g., through a voting procedure)? (Social choice theory)
- Can the agents be induced to reveal their private information regarding preferences? (Mechanism design)
Applications: the choice of a public project, provision of a public good, voting, auctions, exchange economies, contracts, ...
We assume the existence of a social planner/mechanism designer, who has the authority to establish an institution/mechanism through which collective decisions are made, and aims to attain various objectives (e.g., efficient allocation, revenue maximisation).

A general model
A collective choice problem is a tuple $\langle N, Z, (T_i)_{i\in N}, \mu, (u_i)_{i\in N}\rangle$, such that
(i) $N = \{1,\dots,n\}$ is a set of agents,
(ii) $Z$ is a set of possible outcome choices,
(iii) for each agent $i\in N$, $T_i$ is a set of types,
(iv) the Cartesian product $T = \prod_{i\in N} T_i$ defines a state space with associated signal functions $\tau_i : T \to T_i$, so that for every $i\in N$ and $t = (t_1,\dots,t_i,\dots,t_n)\in T$, $\tau_i(t) = t_i$,
(v) each agent $i\in N$ knows his own type $t_i\in T_i$, and has beliefs over the types $T_{-i}$ of the other agents that are derived from a common prior $\mu \in \Delta(T)$, such that
\[
\mu(t_{-i}\mid t_i) = \frac{\mu(t_{-i}, t_i)}{\sum_{t'_{-i}\in T_{-i}} \mu(t_i, t'_{-i})}
\]
(the common prior is assumed for convenience, and is not required for many of the results we derive),
(vi) for each $i\in N$, $u_i : Z\times T \to \mathbb{R}$ is a Bernoulli payoff function (allowing for common values).
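The conditional-belief formula in (v) is easy to compute in a finite example. The sketch below is a hypothetical two-agent, two-type instance (all names and numbers are illustrative choices, not part of the model): it derives agent 1's posterior over agent 2's type from a common prior by Bayes' rule.

```python
# Hypothetical finite example (illustrative numbers): two agents with
# correlated types, T1 = T2 = {"lo", "hi"}, and a common prior mu on T1 x T2.
T2 = ["lo", "hi"]
mu = {("lo", "lo"): 0.4, ("lo", "hi"): 0.1,
      ("hi", "lo"): 0.1, ("hi", "hi"): 0.4}

def belief_1(t1, t2):
    """Agent 1's posterior mu(t2 | t1), derived from the common prior by
    Bayes' rule: mu(t1, t2) / sum over t2' of mu(t1, t2')."""
    return mu[(t1, t2)] / sum(mu[(t1, s)] for s in T2)

# Type "lo" of agent 1 assigns probability 0.4 / (0.4 + 0.1) = 0.8 to t2 = "lo",
# so under this prior types are informative about the opponent's type.
print(belief_1("lo", "lo"))
```

Note that with an independent prior the posterior would not depend on $t_1$ at all; correlation is what makes the conditioning matter.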
Decision rules
Any institution on the basis of which a collective choice from $Z$ is made maps states $t\in T$ into outcomes $z\in Z$.
Definition
(i) A decision rule (or social choice function) is a function $d : T\to Z$ that assigns an outcome in $Z$ to every state of the world in $T$. Hence, the set of decision rules is $Z^T$.
(ii) A decision rule $d : T\to Z$ is ex post efficient if there is no state $t\in T$ for which there exists a $z\in Z$, such that $u_i(z,t) \ge u_i(d(t),t)$ for all $i\in N$, and $u_j(z,t) > u_j(d(t),t)$ for some $j\in N$.
Now consider a social planner who knows $\mu$ (but not the true state $t$), has a well-defined objective (e.g., ex post efficiency), and would like to choose a decision rule in $Z^T$ that attains this objective:
Can the social planner implement an optimal (according to his objective) decision rule?
Not unless he can induce the agents to reveal their private information/types!

Mechanisms
Can the social planner design an institution/game/mechanism that achieves an optimal decision rule in equilibrium?
Definition
A mechanism is a pair $M = \langle (M_i)_{i\in N}, \gamma\rangle$, such that
(i) for each $i\in N$, $M_i$ is a set of messages/actions, and
(ii) $\gamma : M\to Z$ is an outcome function that assigns an outcome in $Z$ to every profile of messages $m\in M := \prod_{i\in N} M_i$.
Combined with the given collective choice problem, a mechanism $M$ defines a Bayesian game. A (pure strategy) Bayesian Nash equilibrium (BNE) of $M = \langle (M_i)_{i\in N}, \gamma\rangle$ is a BNE of the corresponding Bayesian game, i.e., a profile $s \in \prod_{i\in N} M_i^{T_i}$ of strategies $s_i : T_i\to M_i$, such that for every $i$ and $t_i$,
\[
s_i(t_i) \in \arg\max_{m_i\in M_i} \left\{ \sum_{t_{-i}\in T_{-i}} \mu(t_{-i}\mid t_i)\, u_i\big(\gamma(m_i, s_{-i}(t_{-i})), t_i, t_{-i}\big) \right\}.
\]
Example: (first-price) auction

Implementation
Definition
A mechanism $M = \langle (M_i)_{i\in N}, \gamma\rangle$ implements a decision rule $d : T\to Z$ if there exists a BNE $s \in \prod_{i\in N} M_i^{T_i}$ of $M$, such that $\gamma\circ s = d$ (i.e., $\gamma(s_1(t_1),\dots,s_n(t_n)) = [\gamma\circ s](t) = d(t)$ for every $t\in T$).
Remarks:
- Every mechanism $M = \langle (M_i)_{i\in N}, \gamma\rangle$ and corresponding BNE $s'$ implements some decision rule $d' := \gamma\circ s'$.
- The question of implementability asks whether, for some given decision rule $d$, there exists a mechanism $M$ with a BNE $s$ for which $\gamma\circ s = d$.
- If a mechanism $M$ implements a decision rule $d$, the mechanism may have other equilibria $s'$ for which $\gamma\circ s' \ne d$. Not requiring that the decision rule be implemented by a unique equilibrium of $M$ implicitly assumes that the agents will play the social planner's preferred equilibrium strategies. If $\gamma\circ s = d$ for every BNE $s$ of $M$, we say that $M$ strongly (or fully) implements $d$ (see O&R, ch. 10).

Direct revelation mechanisms
Definition
(i) A direct revelation mechanism (DRM) is a mechanism for which $M_i = T_i$ for all $i\in N$. We denote the outcome function of a DRM by $\psi$.
(ii) A DRM $D = \langle (T_i)_{i\in N}, \psi\rangle$ implements a decision rule $d$ if there exists a BNE $s$ of $D$ such that $\psi\circ s = d$.
(iii) A DRM $D$ truthfully implements a decision rule $d$ if there exists a BNE $s$ for which $s_i(t_i) = t_i$, i.e., truth telling is an equilibrium strategy for each agent, and $\psi\circ s = d$. (Note that if such a BNE exists for the DRM $D$, since the corresponding strategies $s_i$ are identity functions, it must be the case that $\psi = d$.)
Example: first-price vs. second-price auction

Proposition (10.1)
A DRM $D = \langle (T_i)_{i\in N}, \psi\rangle$ truthfully implements a decision rule $d$ iff truth telling is a BNE of the DRM $\langle (T_i)_{i\in N}, d\rangle$, i.e., iff for all $i$ and $t_i$,
\[
E\big[u_i(d(t_i, t_{-i}), t_i, t_{-i}) \,\big|\, t_i\big] \;\ge\; E\big[u_i(d(t'_i, t_{-i}), t_i, t_{-i}) \,\big|\, t_i\big] \quad\text{for all } t'_i\in T_i,
\]
where
\[
E\big[u_i(d(t'_i, t_{-i}), t_i, t_{-i}) \,\big|\, t_i\big] = \sum_{t_{-i}\in T_{-i}} \mu(t_{-i}\mid t_i)\, u_i(d(t'_i, t_{-i}), t_i, t_{-i}).
\]
The inequality above is referred to as an incentive compatibility constraint, since it ensures that no player has an incentive to misrepresent his type, as long as the other players tell the truth.

The (Bayesian) revelation principle
Proposition (10.2 – Revelation Principle)
A decision rule $d : T\to Z$ can be implemented by some mechanism $M = \langle (M_i)_{i\in N}, \gamma\rangle$ iff it can be truthfully implemented using the DRM $\langle (T_i)_{i\in N}, d\rangle$.
Proof.
[$\Leftarrow$] Set $M_i = T_i$ and $\psi = d$.
[$\Rightarrow$] Let $M = \langle (M_i)_{i\in N}, \gamma\rangle$ be a mechanism that implements $d$ using an equilibrium strategy profile $s$. Then $\gamma\circ s = d$. Consider the DRM $D = \langle (T_i)_{i\in N}, \psi\rangle$ defined by $\psi := \gamma\circ s$. We can interpret $D$ as follows: instead of reporting a message $m_i$, each type $t_i$ reports a type $t'_i$; the social planner computes $s_i(t'_i)$ for each $i$ and reported type $t'_i$, and chooses the outcome $\gamma(s_1(t'_1),\dots,s_n(t'_n))$. If an agent had an incentive to misrepresent his type in $D$, given that the other agents are truthful, then $s$ could not be a BNE of $M$. (Make sure that you can write down the associated equilibrium conditions in terms of expected payoffs!) Since $\psi = \gamma\circ s = d$, this implies that $\langle (T_i)_{i\in N}, d\rangle$ truthfully implements $d$.

Implications of the revelation principle
- In order to identify all decision rules that can be implemented in a collective choice setting (through any arbitrary mechanism), we only need to find all the decision rules $d$ that are truthfully implementable by the DRM $\langle (T_i)_{i\in N}, d\rangle$.
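The [$\Rightarrow$] construction can be checked by brute force in a small finite example. The sketch below is entirely hypothetical (a two-agent public-project problem with independent uniform types and an illustrative cost share of 0.4, all my own choices): it verifies that a profile $s$ is a BNE of an indirect mechanism, builds the DRM $\psi := \gamma\circ s$, and confirms that truth telling is a BNE of the DRM.

```python
from itertools import product

# Hypothetical finite example: two agents, types 0 or 1 (the agent's value
# of a public project), independent uniform prior; if the project is built,
# each agent pays an illustrative cost share of 0.4.
types = [0, 1]

def u(i, z, t):
    """Bernoulli payoff: own value minus cost share if the project is built."""
    return t[i] - 0.4 if z == "build" else 0.0

# Indirect mechanism M: messages "yes"/"no"; build iff both say "yes".
messages = ["yes", "no"]
def gamma(m):
    return "build" if m == ("yes", "yes") else "skip"

def expected(i, ti, mi, s_other, outcome):
    """E[u_i | t_i] when i sends mi and the opponent plays strategy s_other."""
    total = 0.0
    for tj in types:  # uniform independent prior: weight 1/2 on each tj
        t = (ti, tj) if i == 0 else (tj, ti)
        m = (mi, s_other[tj]) if i == 0 else (s_other[tj], mi)
        total += 0.5 * u(i, outcome(m), t)
    return total

def is_bne(s, msg_set, outcome):
    """Brute-force check: no type of either agent gains from deviating."""
    for i, ti in product([0, 1], types):
        own = expected(i, ti, s[i][ti], s[1 - i], outcome)
        if any(expected(i, ti, m, s[1 - i], outcome) > own + 1e-12
               for m in msg_set):
            return False
    return True

# A BNE of M whose messages are not literal type reports:
s = ({0: "no", 1: "yes"}, {0: "no", 1: "yes"})
print(is_bne(s, messages, gamma))   # True

# Revelation-principle construction: the DRM with outcome function psi = gamma o s.
def psi(report):
    return gamma((s[0][report[0]], s[1][report[1]]))

truth = ({0: 0, 1: 1}, {0: 0, 1: 1})
print(is_bne(truth, types, psi))    # True: truth telling is a BNE of the DRM
```

The same `is_bne` checker is reused for both games, which mirrors the logic of the proof: a profitable misreport in the DRM would translate directly into a profitable deviation in the original mechanism.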
- A mechanism designer who aims to implement a decision rule that is optimal according to some criterion, but is constrained by the fact that the agents possess private information, only needs to "search" over the set of truthfully implementable DRMs to identify the best (according to his objectives) decision rule that is attainable in the given social choice setting.
- The revelation principle makes it easy to identify implementable or optimal decision rules, although the corresponding DRMs may be complicated and not very realistic. However, after identifying an optimal decision rule using the revelation principle, we can look for simple non-direct mechanisms that implement it.

Screening/Nonlinear pricing
= mechanism design problem with one agent
- agent: a buyer of an infinitely divisible good that can be consumed at any non-negative quantity/quality $q \ge 0$
- mechanism designer: the seller of the good—a monopolist with a linear production cost $cq$, where $c > 0$
The buyer has vNM preferences over quantity consumed $q \ge 0$ and total payment $p$ that can be represented by a Bernoulli payoff function
\[
u_\theta(q, p) = \theta v(q) - p,
\]
where
(i) $\theta$ is the buyer's type, which is known to the buyer but not to the seller,
(ii) $v : \mathbb{R}_+ \to \mathbb{R}_+$ is a differentiable function that satisfies $v(0) = 0$, $v'(q) > 0$ and $v''(q) < 0$ for all $q \ge 0$.
Hence, the buyer's preferences are quasilinear, in the sense that they are separable in consumption and money, and linear/risk neutral with respect to money.

The seller does not know the buyer's valuation for the good, and wants to implement a selling mechanism that potentially allows him to price discriminate, i.e., to charge a higher price to a higher-valuation buyer.
Furthermore, the seller
(i) can commit to implementing a chosen selling mechanism,
(ii) believes that the buyer's type $\theta$ is drawn from a support $\Theta = [\underline{\theta}, \bar{\theta}] \subset \mathbb{R}_+$ according to a distribution function $F$ with density $f$ that satisfies $f(\theta) > 0$ for all $\theta\in\Theta$ (one-dimensional type space!), and
(iii) has an objective of maximising expected profits (the seller is risk neutral).
To guarantee that the model has a non-trivial solution, we assume that $\underline{\theta}\, v'(0) > c$ and $\lim_{q\to\infty} \bar{\theta}\, v'(q) < c$.

To find an optimal selling mechanism, we apply the revelation principle and only search over the set of truthful DRMs. A decision rule in this setting maps a type $\theta\in\Theta$ to an outcome consisting of a pair of quantity consumed $q(\theta)\in\mathbb{R}_+$ and an associated payment $p(\theta)\in\mathbb{R}$. Hence, a DRM is defined by a pair of functions $\langle q, p\rangle \in \mathbb{R}_+^\Theta \times \mathbb{R}^\Theta$.
Incentive compatibility (IC) of a mechanism $\langle q, p\rangle$ is equivalent to
\[
\theta v(q(\theta)) - p(\theta) \ge \theta v(q(\theta')) - p(\theta') \quad\text{for all } \theta, \theta' \in \Theta.
\]
To capture the requirement that purchasing is voluntary for the buyer, we will also impose the following participation/individual rationality (IR) constraint:
\[
\theta v(q(\theta)) - p(\theta) \ge 0 \quad\text{for all } \theta\in\Theta.
\]

Characterising the optimal mechanism
The seller's problem of finding an optimal selling mechanism $\langle q, p\rangle$ can then be summarised as
\[
\max_{\langle q,p\rangle \in \mathbb{R}_+^\Theta \times \mathbb{R}^\Theta} \int_\Theta [p(\theta) - c q(\theta)] f(\theta)\, d\theta,
\]
subject to
[IC] $\theta v(q(\theta)) - p(\theta) \ge \theta v(q(\theta')) - p(\theta')$ for all $\theta, \theta'\in\Theta$,
[IR] $\theta v(q(\theta)) - p(\theta) \ge 0$ for all $\theta\in\Theta$.

Lemma (10.3)
If $\langle q, p\rangle$ satisfies IC, then $q$ is non-decreasing in $\theta$.
Proof. For any $\theta > \theta'$, IC implies that
\[
\theta v(q(\theta)) - p(\theta) \ge \theta v(q(\theta')) - p(\theta'), \qquad
\theta' v(q(\theta')) - p(\theta') \ge \theta' v(q(\theta)) - p(\theta)
\]
\[
\Rightarrow\; v(q(\theta))(\theta - \theta') \ge v(q(\theta'))(\theta - \theta')
\;\Leftrightarrow\; v(q(\theta)) \ge v(q(\theta'))
\;\Leftrightarrow\; q(\theta) \ge q(\theta').
\]

Lemma (10.4)
Suppose $\langle q, p\rangle$ satisfies IC, and define $U : \Theta\to\mathbb{R}$ by $U(\theta) := \theta v(q(\theta)) - p(\theta)$. Then $U$ is non-decreasing and convex, and
\[
U(\theta) = U(\underline{\theta}) + \int_{\underline{\theta}}^{\theta} v(q(x))\, dx.
\]
Proof.
IC implies that $U(\theta) = \max_{\theta'\in\Theta} \{\theta v(q(\theta')) - p(\theta')\}$, and therefore, since $\theta v(q(\theta')) - p(\theta')$ is a non-decreasing and affine (hence convex) function of $\theta$ for every fixed $\theta'$, that $U$ must be non-decreasing and convex, as a pointwise maximum of a collection of non-decreasing and convex functions.
Following a standard property of convex functions, $U$ must be differentiable except for at most countably many points, and applying the Envelope Theorem, IC implies that for any $\theta$ at which $U$ is differentiable,
\[
U'(\theta) = v(q(\theta)).
\]
As a convex function, $U$ is "absolutely continuous", and thus a version of the Fundamental Theorem of Calculus yields
\[
U(\theta) = U(\underline{\theta}) + \int_{\underline{\theta}}^{\theta} v(q(x))\, dx.
\]
(See Mailath for additional details and references.)
Lemma 10.4 implies that in an incentive compatible mechanism, the utilities of all types $\theta > \underline{\theta}$ are determined by $q$ and $U(\underline{\theta})$, the utility assigned to the lowest type $\underline{\theta}$. Hence, any two mechanisms that have the same $q$ and $U(\underline{\theta})$ are "payoff-equivalent," in the sense that they must yield the same utility for every type $\theta$.

Lemma (10.5 – Revenue Equivalence)
If $\langle q, p\rangle$ satisfies IC, then
\[
p(\theta) = p(\underline{\theta}) + \big[\theta v(q(\theta)) - \underline{\theta} v(q(\underline{\theta}))\big] - \int_{\underline{\theta}}^{\theta} v(q(x))\, dx.
\]
Proof. The result follows by substituting $U(\theta) = \theta v(q(\theta)) - p(\theta)$ into the expression for $U(\theta)$ from Lemma 10.4, and solving for $p(\theta)$.
Lemma 10.5 extends the payoff-equivalence arising from Lemma 10.4, by showing that in an incentive compatible mechanism, the payments made by types $\theta > \underline{\theta}$ are pinned down by $q$ and $p(\underline{\theta})$. As a consequence, all mechanisms having the same $q$ and $p(\underline{\theta})$ yield identical payments for all types $\theta$, and therefore the same expected revenues (and profits) to the seller.

Proposition (10.6)
A mechanism $\langle q, p\rangle$ satisfies IC iff
(a) $q$ is non-decreasing, and
(b) for all $\theta\in\Theta$,
\[
p(\theta) = p(\underline{\theta}) + \big[\theta v(q(\theta)) - \underline{\theta} v(q(\underline{\theta}))\big] - \int_{\underline{\theta}}^{\theta} v(q(x))\, dx.
\]
Proof. [$\Leftarrow$] For all $\theta, \theta'\in\Theta$,
\begin{align*}
U(\theta) \ge \theta v(q(\theta')) - p(\theta')
&\;\Leftrightarrow\; U(\theta) \ge \theta v(q(\theta')) - \theta' v(q(\theta')) + \theta' v(q(\theta')) - p(\theta')\\
&\;\Leftrightarrow\; U(\theta) \ge \theta v(q(\theta')) - \theta' v(q(\theta')) + U(\theta')\\
&\;\Leftrightarrow\; U(\theta) - U(\theta') \ge \theta v(q(\theta')) - \theta' v(q(\theta'))\\
&\;\Leftrightarrow\; \int_{\theta'}^{\theta} v(q(x))\, dx \ge \int_{\theta'}^{\theta} v(q(\theta'))\, dx.
\end{align*}
The last inequality always holds whenever $q$ is non-decreasing, since $v' > 0$.

Lemma (10.7)
An incentive compatible mechanism $\langle q, p\rangle$ satisfies IR iff $U(\underline{\theta}) \ge 0$.
Proof. If $\langle q, p\rangle$ is incentive compatible, then $U$ is non-decreasing by Lemma 10.4, and therefore $U(\theta) \ge 0$ for all $\theta\in\Theta$ iff $U(\underline{\theta}) \ge 0$.
The lemma implies that if the seller wants to maximise expected profits, he should choose
\[
U(\underline{\theta}) = \underline{\theta} v(q(\underline{\theta})) - p(\underline{\theta}) = 0
\;\Leftrightarrow\;
p(\underline{\theta}) = \underline{\theta} v(q(\underline{\theta})).
\]
Substituting this expression for $p(\underline{\theta})$ into the formula for $p(\theta)$ from Proposition 10.6 yields
\[
p(\theta) = \theta v(q(\theta)) - \int_{\underline{\theta}}^{\theta} v(q(x))\, dx.
\]

The seller's problem then reduces to finding a non-decreasing function $q : \Theta\to\mathbb{R}_+$ that maximises
\[
\int_\Theta \left[\theta v(q(\theta)) - \int_{\underline{\theta}}^{\theta} v(q(x))\, dx - c q(\theta)\right] f(\theta)\, d\theta
= \int_\Theta [\theta v(q(\theta)) - c q(\theta)] f(\theta)\, d\theta
- \int_\Theta \left[\int_{\underline{\theta}}^{\theta} v(q(x))\, dx\right] f(\theta)\, d\theta.
\]
By changing the order of integration of the two variables $x$ and $\theta$, the double integral constituting the last expression can be rewritten as
\[
\int_\Theta \left[\int_{\underline{\theta}}^{\theta} v(q(x))\, dx\right] f(\theta)\, d\theta
= \int_\Theta \left[\int_x^{\bar{\theta}} v(q(x)) f(\theta)\, d\theta\right] dx
= \int_\Theta v(q(x)) \left[\int_x^{\bar{\theta}} f(\theta)\, d\theta\right] dx
= \int_\Theta v(q(x)) (1 - F(x))\, dx
= \int_\Theta v(q(\theta)) \frac{1 - F(\theta)}{f(\theta)} f(\theta)\, d\theta.
\]

Hence, the seller must choose a non-decreasing $q : \Theta\to\mathbb{R}_+$ to maximise
\[
\int_\Theta \left[\left(\theta - \frac{1 - F(\theta)}{f(\theta)}\right) v(q(\theta)) - c q(\theta)\right] f(\theta)\, d\theta.
\]
Remarks:
- The term inside the square brackets is often referred to as the virtual surplus derived from type $\theta$ (as opposed to the actual surplus from this type, $\theta v(q(\theta)) - c q(\theta)$).
- The term $v(q(\theta)) \frac{1 - F(\theta)}{f(\theta)}$ can be interpreted as an information rent (of the buyer), which captures the reduction in surplus due to the presence of asymmetric information and the fact that the seller must offer an incentive to the buyer to reveal his type truthfully.
- Our strategy for solving the seller's problem is to first determine the value of $q(\theta)$ that maximises the virtual surplus for each type $\theta$, and then to confirm/ensure that the resulting function $q$ is non-decreasing.

Define $\nu(\theta) := \theta - \frac{1 - F(\theta)}{f(\theta)}$. Then the first-order necessary condition for a maximiser $q$ of the virtual surplus is
\[
v'(q)\, \nu(\theta) = c.
\]
We then have the following possible cases, where $\nu(\theta) = \theta - \frac{1 - F(\theta)}{f(\theta)}$:
(1) If $\nu(\theta) \le 0$, the first-order condition has no solution in $q$, as the "marginal virtual benefit" is always less than the marginal cost, and therefore the optimal $q$ satisfies $q(\theta) = 0$.
(2) If $\nu(\theta) > 0$ and $v'(0)\, \nu(\theta) \le c$, the optimal $q$ is again $q(\theta) = 0$, since $v'$ is decreasing, so the marginal virtual benefit can never equal the marginal cost for any positive $q$.
(3) If $\nu(\theta) > 0$ and $v'(0)\, \nu(\theta) > c$, there exists a unique solution $q(\theta) > 0$ to the first-order condition, since we assumed that $\lim_{q\to\infty} \bar{\theta}\, v'(q) < c$.

To ensure that the resulting $q(\theta)$ is non-decreasing in $\theta$, we assume that $\nu(\theta)$ is non-decreasing in $\theta$, in which case we say that the associated distribution function $F$ is regular. Since $v'' < 0$, this implies that the solution to the first-order condition must be non-decreasing in $\theta$. Note that a sufficient condition for a distribution function $F$ to be regular is that it has a non-decreasing "hazard rate" $\frac{f(\theta)}{1 - F(\theta)}$.

Proposition (10.8)
Suppose that $F$ is regular. Then the optimal mechanism $\langle q, p\rangle$ is given as follows:
(a) If $v'(0)\, \nu(\theta) \le c$, then $q(\theta) = 0$,
(b) if $v'(0)\, \nu(\theta) > c$, then $q(\theta)$ is the unique solution to $v'(q(\theta))\, \nu(\theta) = c$, and
(c) the optimal payment function $p$ is defined by
\[
p(\theta) = \theta v(q(\theta)) - \int_{\underline{\theta}}^{\theta} v(q(x))\, dx.
\]

Remarks:
- If $\theta = \bar{\theta}$, then $1 - F(\bar{\theta}) = 0$, and $q(\bar{\theta})$ solves $\bar{\theta}\, v'(q(\bar{\theta})) = c$. Hence, $q(\bar{\theta})$ maximises the actual surplus, and thus type $\bar{\theta}$ consumes the "first-best" quantity for this type: "no distortion at the top".
- For $\theta < \bar{\theta}$, $q(\theta)$ is lower than the surplus-maximising quantity, and therefore the quantity supplied to such types is distorted downwards.
- In the optimal and incentive compatible mechanism, each type $\theta$ chooses a quantity-price pair from $\{(q(\theta'), p(\theta'))\}_{\theta'\in\Theta}$—incentive compatibility requires that $\theta$ cannot benefit from choosing a pair $(q(\theta'), p(\theta'))$ targeted to a type $\theta' \ne \theta$. If $q$ is invertible, it is easy to see that the seller can implement such a mechanism by posting a (possibly nonlinear) pricing schedule $P : q(\Theta)\to\mathbb{R}$, where $P(Q) := p(q^{-1}(Q))$, and letting the buyer choose a quantity $Q$ to purchase.
This fact is usually referred to as the taxation principle. (The assignment considers the general case where $q$ may not necessarily be invertible. See also the numerical Example 2.1 in Börgers, pp. 25–26.)
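As a concrete check of Proposition 10.8 and the surrounding derivations, the sketch below works through an illustrative parametrisation (my own choice, not from the lecture): $v(q) = \sqrt{q}$, $c = 1$, and $\theta$ uniform on $[1, 2]$, so that $F(\theta) = \theta - 1$, $f \equiv 1$, and the virtual type $\nu(\theta) = \theta - (1 - F(\theta))/f(\theta) = 2\theta - 2$ is non-decreasing (so $F$ is regular). The first-order condition then yields $q(\theta) = (\theta - 1)^2$ and $p(\theta) = \theta(\theta - 1) - (\theta - 1)^2/2$ in closed form, and the code verifies monotonicity, IC, and IR on a grid.

```python
import numpy as np

# Illustrative parametrisation (not from the lecture): v(q) = sqrt(q),
# c = 1, theta ~ U[1, 2], so F(theta) = theta - 1, f = 1, and the
# virtual type is nu(theta) = theta - (1 - F(theta))/f(theta) = 2*theta - 2.
c = 1.0
v = np.sqrt

def q_opt(theta):
    """Solve v'(q) * nu(theta) = c, i.e. nu / (2 sqrt(q)) = c, which gives
    q = (nu / (2c))^2 = (theta - 1)^2; set q = 0 where nu <= 0."""
    nu = 2.0 * np.asarray(theta) - 2.0
    return np.where(nu > 0, (nu / (2.0 * c)) ** 2, 0.0)

def p_opt(theta):
    """p(theta) = theta * v(q(theta)) - integral_1^theta v(q(x)) dx, which
    here is theta * (theta - 1) - (theta - 1)^2 / 2 in closed form."""
    theta = np.asarray(theta)
    return theta * (theta - 1.0) - 0.5 * (theta - 1.0) ** 2

grid = np.linspace(1.0, 2.0, 201)
q, p = q_opt(grid), p_opt(grid)

assert np.all(np.diff(q) >= 0)        # q non-decreasing (Lemma 10.3 / regularity)

# IC on the grid: U[i, j] = payoff of type theta_i from the bundle meant
# for theta_j; every type does best with its own bundle (the diagonal).
U = grid[:, None] * v(q)[None, :] - p[None, :]
assert np.allclose(U.max(axis=1), np.diag(U))

assert np.isclose(U[0, 0], 0.0)       # IR binds for the lowest type: U(1) = 0
assert np.all(np.diag(U) >= -1e-12)   # IR holds for every type

print(q_opt(2.0), p_opt(2.0))  # top type: first-best quantity 1.0, payment 1.5
```

Setting the information-rent term to zero (i.e., replacing $\nu(\theta)$ by $\theta$ in `q_opt`) reproduces the first-best schedule, which makes the downward distortion for $\theta < \bar{\theta}$ easy to visualise.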