Game theory and auctions

We can often model the behavior of web users as selfish agents. This has motivated our research on a game-theoretic study of network phenomena and of auctions. The former are phenomena in which (selfish) users compete against each other for some network resource (e.g., network bandwidth), and we study the properties of the various equilibria that are attained. The latter are motivated by web applications such as computational advertising, where an auction is run each time ads have to be selected for a web page. Here we study how to design auctions that maximize either the social welfare or the auctioneer's revenue, that satisfy desirable properties (e.g., truthfulness, envy-freeness, Pareto optimality), and that work under different settings (combinatorial auctions, auctions with or without budgets, etc.).

 

Publications


2016

R. Colini-Baldeschi, B. de Keijzer, S. Leonardi, and S. Turchetta

Approximately Efficient Double Auctions with Strong Budget Balance

Proc. 27th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2016), 2016

Mechanism design for one-sided markets is an area of extensive research in economics and, for more than a decade, in computer science as well. Two-sided markets, on the other hand, have not received the same attention despite the numerous applications to web advertisement, stock exchange, and frequency spectrum allocation. This work studies double auctions, in which unit-demand buyers and unit-supply sellers act strategically.

An ideal goal in double auction design is to maximize the social welfare of buyers and sellers with individually rational (IR), incentive compatible (IC) and strongly budget-balanced (SBB) mechanisms. The first two properties are standard. SBB requires that the payments charged to the buyers are entirely handed to the sellers. This property is crucial in all contexts that do not allow the auctioneer to retain a share of the buyers' payments or to subsidize the market.

Unfortunately, this goal is known to be unachievable even for the special case of bilateral trade, where there is only one buyer and one seller. Therefore, in subsequent papers, meaningful trade-offs between these requirements have been investigated.

Our main contribution is the first IR, IC and SBB mechanism that provides an O(1)-approximation to the optimal social welfare. This result holds for any number of buyers and sellers with arbitrary, independent distributions. Moreover, our result continues to hold when there is an additional matroid constraint on the sets of buyers who may get allocated an item. To prove our main result, we devise an extension of sequential posted price mechanisms to two-sided markets. In addition to this, we improve the best-known approximation bounds for the bilateral trade problem.
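
As a minimal illustration of the three properties (and not the mechanism of the paper), consider a posted-price bilateral trade: a price p is fixed independently of the reports, trade happens only if the buyer's reported value is at least p and the seller's reported cost is at most p, and the buyer's payment goes entirely to the seller. A short Python sketch with made-up numbers:

def posted_price_bilateral_trade(buyer_value, seller_cost, p):
    # Posting a price p that does not depend on the reports makes the rule
    # individually rational and incentive compatible; handing the buyer's
    # payment entirely to the seller makes it strongly budget balanced.
    if buyer_value >= p >= seller_cost:
        return {"trade": True, "buyer_pays": p, "seller_receives": p}
    return {"trade": False, "buyer_pays": 0.0, "seller_receives": 0.0}

print(posted_price_bilateral_trade(buyer_value=10.0, seller_cost=4.0, p=6.0))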


2015

R. Colini-Baldeschi, S. Leonardi, M. Henzinger, and M. Starnberger

On Multiple Keyword Sponsored Search Auctions with Budgets

ACM Transactions on Economics and Computation, Volume 4, Number 1, 2015

We study multiple keyword sponsored search auctions with budgets. Each keyword has multiple ad slots with a click-through rate. The bidders have additive valuations, which are linear in the click-through rates, and budgets, which restrict their overall payments. Additionally, the number of slots per keyword assigned to a bidder is bounded.

We show the following results: (1) We give the first mechanism for multiple keywords, where click-through rates differ among slots. Our mechanism is incentive compatible in expectation, individually rational in expectation, and Pareto optimal. (2) We study the combinatorial setting, where each bidder is only interested in a subset of the keywords. We give an incentive compatible, individually rational, Pareto-optimal, and deterministic mechanism for identical click-through rates. (3) We give an impossibility result for incentive compatible, individually rational, Pareto-optimal, and deterministic mechanisms for bidders with diminishing marginal valuations.

D. Ferraioli, and P. Penna

Imperfect Best-Response Mechanisms

Theory of Computing Systems, Volume 57, Number 3, 2015

Best-response mechanisms, introduced by Nisan et al. (2011), provide a unifying framework for studying various distributed protocols in which the participants are instructed to repeatedly best respond to each others' strategies. Two fundamental features of these mechanisms are convergence and incentive compatibility. This work investigates convergence and incentive compatibility conditions of such mechanisms when players are not guaranteed to always best respond but rather play an imperfect best-response strategy. That is, at every time step every player deviates from the prescribed best-response strategy according to some probability parameter. The results explain to what extent convergence and incentive compatibility depend on the assumption that players never make mistakes, and how robust such protocols are to "noise" or "mistakes".
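
A minimal simulation sketch of the imperfect best-response idea (the two-machine load-balancing game, the noise parameter p, and the update schedule below are our own illustrative assumptions, not the paper's model): in each round one player best responds with probability 1 − p and picks a uniformly random strategy with probability p.

import random

def imperfect_best_response(n_players=10, p=0.1, rounds=100, seed=0):
    # Each player controls one unit of load; a player's cost is the load of
    # the machine it uses. With probability p a player "trembles" and picks a
    # random machine instead of a best response.
    rng = random.Random(seed)
    strategy = [rng.randrange(2) for _ in range(n_players)]  # machine 0 or 1
    for _ in range(rounds):
        i = rng.randrange(n_players)                 # one player updates per round
        load = [strategy.count(0), strategy.count(1)]
        load[strategy[i]] -= 1                       # loads excluding player i
        if rng.random() < p:
            strategy[i] = rng.randrange(2)           # mistake: random strategy
        else:
            strategy[i] = 0 if load[0] <= load[1] else 1  # best response
    return strategy

final = imperfect_best_response()
print("final loads:", final.count(0), final.count(1))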

A. Gupta, J. Könemann, S. Leonardi, R. Ravi, and G. Schäfer

Efficient Cost-Sharing Mechanisms for Prize-Collecting Problems

Mathematical Programming, Volume 152, Number 1, 2015

We consider the problem of designing efficient mechanisms to share the cost of providing some service to a set of self-interested customers. In this paper, we mainly focus on cost functions that are induced by prize-collecting optimization problems. Such cost functions arise naturally whenever customers can be served in two different ways: either by being part of a common service solution or by being served individually. One of our main contributions is a general lifting technique that allows us to extend the social cost approximation guarantee of a Moulin mechanism for the respective non-prize-collecting problem to its prize-collecting counterpart. Our lifting technique also suggests a generic design template to derive Moulin mechanisms for prize-collecting problems. The approach is particularly suited for cost-sharing methods that are based on primal-dual algorithms. We illustrate the applicability of our approach by deriving Moulin mechanisms for prize-collecting variants of submodular cost-sharing, facility location and Steiner forest problems. All our mechanisms are essentially best possible with respect to budget balance and social cost approximation guarantees. Finally, we show that the Moulin mechanism by Könemann et al. (SIAM J. Comput. 37(5):1319-1341, 2008) for the Steiner forest problem is O(log^3 k)-approximate. Our approach adds a novel methodological contribution to existing techniques by showing that such a result can be proved by embedding the graph distances into random hierarchically separated trees.

A. Anagnostopoulos, L. Becchetti, B. de Keijzer, and G. Schäfer

Inefficiency of Games with Social Context [pdf]

Theory of Computing Systems, Volume 57, Number 3, 2015

The study of other-regarding player behavior such as altruism and spite in games has recently received quite some attention in the algorithmic game theory literature. Already for very simple models, it has been shown that altruistic behavior can actually be harmful for society in the sense that the price of anarchy may increase as the players become more altruistic. In this paper, we study the severity of this phenomenon for more realistic settings in which there is a complex underlying social structure, causing the players to direct their altruistic and spiteful behavior in a refined player-specific sense (depending, for example, on friendships that exist among the players). Our findings show that the increase in the price of anarchy is modest for congestion games and minsum scheduling games, whereas it might be drastic for generalized second price auctions.

B. de Keijzer, G. Schäfer, and O. Telelis

The Strong Price of Anarchy of Linear Bottleneck Congestion Games

Theory of Computing Systems, Volume 57, Number 2, 2015

We study the inefficiency of equilibrium outcomes in Bottleneck Congestion games. These games model situations in which strategic players compete for a limited number of facilities. Each player allocates his weight to a (feasible) subset of the facilities with the goal to minimize the maximum (weight-dependent) latency that he experiences on any of these facilities. We analyze the (strong) Price of Anarchy of these games for a natural load balancing social cost objective, i.e., minimize the maximum latency of a facility. In our studies, we focus on Bottleneck Congestion games with linear latency functions. These games still constitute a rich class of games and generalize, for example, Load Balancing games with identical or uniformly related machines (with or without restricted assignments). We derive upper and asymptotically matching lower bounds on the (strong) Price of Anarchy of these games. We also derive more refined bounds for several special cases of these games, including the cases of identical player weights, identical latency functions and symmetric strategy sets. Further, we provide lower bounds on the Price of Anarchy for k-strong equilibria.
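
A tiny worked example of the social cost objective used here, under illustrative assumptions (three facilities with linear latencies a_e · load, two weighted players choosing facility subsets; the numbers are made up, not an instance from the paper): the social cost of a profile is the maximum latency over all facilities.

# Bottleneck social cost with linear latencies latency_e(load) = a_e * load.
coeff = {"e1": 1.0, "e2": 2.0, "e3": 0.5}           # a_e for each facility
weights = {"p1": 1.0, "p2": 2.0}                    # player weights
profile = {"p1": {"e1", "e2"}, "p2": {"e2", "e3"}}  # chosen facility subsets

load = {e: sum(w for p, w in weights.items() if e in profile[p]) for e in coeff}
latency = {e: coeff[e] * load[e] for e in coeff}
social_cost = max(latency.values())                 # bottleneck objective
print(latency, "social cost =", social_cost)        # social cost = 6.0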

A. Anagnostopoulos, D. Ferraioli, and S. Leonardi

Competitive Influence in Social Networks: Convergence, Submodularity, and Competition Effects [pdf]

Proc. 2015 International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2015), 2015

In the last 10 years, a vast amount of scientific literature has studied the problem of influence maximization. Yet, only very recently have scientists started considering the more realistic case in which competing entities try to expand their market and maximize their share via viral marketing. Goyal and Kearns [STOC 2012] present a model for the diffusion of two competing alternatives in a social network, which consists of two phases: an activation phase, in which nodes choose whether to adopt one of the two alternatives or none of them, and a selection phase, in which adopters choose which of the two alternatives to adopt.

In this work we consider this two-phase model, composing some of the best-known dynamics (threshold, voter, and logit models), and we ask the following questions: (1) How is the stationary distribution of the composition of these dynamics related to those of the single composing dynamics? (2) Does the number of adopters of one of the alternatives increase in a monotone and submodular way with respect to the set of initial adopters of that alternative? (3) To what extent does the competition among alternatives affect the total number of agents adopting one of the alternatives?

J. R. Correa, J. Jong, B. de Keijzer, and M. Uetz

The Curse of Sequentiality in Routing Games

Proc. 11th Conference on Web and Internet Economics (WINE 2015), 2015

In "The curse of simultaneity", Paes Leme et al. show that there are interesting classes of games for which sequential decision making and corresponding subgame perfect equilibria avoid worst case Nash equilibria, resulting in substantial improvements for the price of anarchy. This is called the sequential price of anarchy. A handful of papers have lately analysed it for various problems, yet one of the most interesting open problems was to pin down its value for linear atomic routing (also: network congestion) games, where the price of anarchy equals 5/2. The main contribution of this paper is the surprising result that the sequential price of anarchy is unbounded even for linear symmetric routing games, thereby showing that sequentiality can be arbitrarily worse than simultaneity for this class of games. Complementing this result we solve an open problem in the area by establishing that the (regular) price of anarchy for linear symmetric routing games equals 5/2. Additionally, we prove that in these games, even with two players, computing the outcome of a subgame perfect equilibrium is NP-hard.

M. Adamczyk, A. Borodin, D. Ferraioli, B. de Keijzer, and S. Leonardi

Sequential Posted Price Mechanisms with Correlated Valuations

Proc. 11th Conference on Web and Internet Economics (WINE 2015), 2015

We study the revenue performance of sequential posted price mechanisms and some natural extensions, for a general setting where the valuations of the buyers are drawn from a correlated distribution. Sequential posted price mechanisms are conceptually simple mechanisms that work by proposing a "take-it-or-leave-it" offer to each buyer. We apply sequential posted price mechanisms to single-parameter multi-unit settings in which each buyer demands only one item and the mechanism can assign the service to at most k of the buyers. For standard sequential posted price mechanisms, we prove that with the valuation distribution having finite support, no sequential posted price mechanism can extract a constant fraction of the optimal expected revenue, even with unlimited supply. We extend this result to the case of a continuous valuation distribution when various standard assumptions hold simultaneously. In fact, it turns out that the best fraction of the optimal revenue that is extractable by a sequential posted price mechanism is proportional to the ratio of the highest and lowest possible valuation. We prove that for two simple generalizations of these mechanisms, a better revenue performance can be achieved: if the sequential posted price mechanism has for each buyer the option of either proposing an offer or asking the buyer for its valuation, then an Ω(1/max(1,d)) fraction of the optimal revenue can be extracted, where d denotes the "degree of dependence" of the valuations, ranging from complete independence (d=0) to arbitrary dependence (d=n−1). When we generalize the sequential posted price mechanisms further, such that the mechanism has the ability to make a take-it-or-leave-it offer to the i-th buyer that depends on the valuations of all buyers except i, we prove that a constant fraction (2−√e)/4 (approximately 0.088) of the optimal revenue can be always extracted.
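
A minimal sketch of a standard sequential posted price mechanism in the k-unit, unit-demand setting described above (the example values and price schedule are arbitrary placeholders, not a revenue-optimized choice):

def sequential_posted_price(values, prices, k):
    # Buyers are approached in a fixed order; buyer i gets a take-it-or-leave-it
    # price and accepts iff her value is at least that price and supply remains.
    revenue, allocated = 0.0, []
    for i, (v, p) in enumerate(zip(values, prices)):
        if len(allocated) < k and v >= p:   # offer accepted
            allocated.append(i)
            revenue += p
    return allocated, revenue

print(sequential_posted_price(values=[5.0, 2.0, 7.0, 1.0],
                              prices=[4.0, 3.0, 6.0, 1.5], k=2))  # ([0, 2], 10.0)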


2014

F. Grandoni, P. Krysta, S. Leonardi, and C. Ventre

Utilitarian Mechanism Design for Multiobjective Optimization

SIAM Journal on Computing, Volume 43, Number 4, 2014

In a classic optimization problem, the complete input data is assumed to be known to the algorithm. This assumption may not be true anymore in optimization problems motivated by the Internet where part of the input data is private knowledge of independent selfish agents. The goal of algorithmic mechanism design is to provide (in polynomial time) a solution to the optimization problem and a set of incentives for the agents such that disclosing the input data is a dominant strategy for the agents. In the case of NP-hard problems, the solution computed should also be a good approximation of the optimum. In this paper we focus on mechanism design for multiobjective optimization problems. In this setting we are given a main objective function and a set of secondary objectives which are modeled via budget constraints. Multiobjective optimization is a natural setting for mechanism design as many economical choices ask for a compromise between different, partially conflicting goals. The main contribution of this paper is showing that two of the main tools for the design of approximation algorithms for multiobjective optimization problems, namely, approximate Pareto sets and Lagrangian relaxation, can lead to truthful approximation schemes. By exploiting the method of approximate Pareto sets, we devise truthful deterministic and randomized multicriteria fully polynomial-time approximation schemes (FPTASs) for multiobjective optimization problems whose exact version admits a pseudopolynomial-time algorithm, as, for instance, the multibudgeted versions of minimum spanning tree, shortest path, maximum (perfect) matching, and matroid intersection. Our construction also applies to multidimensional knapsack and multiunit combinatorial auctions. Our FPTASs compute a (1+ε)-approximate solution violating each budget constraint by a factor (1+ε). When feasible solutions induce an independence system, i.e., when subsets of feasible solutions are feasible as well, we present a PTAS (not violating any constraint), which combines the approach above with a novel monotone way to guess the heaviest elements in the optimum solution. Finally, we present a universally truthful Las Vegas PTAS for minimum spanning tree with a single budget constraint, where one wants to compute a minimum cost spanning tree whose length is at most a given value L. This result is based on the Lagrangian relaxation method, in combination with our monotone guessing step and with a random perturbation step (ensuring low expected running time). This result can be derandomized in the case of integral lengths. All the mentioned results match the best known approximation ratios, which are, however, obtained by nontruthful algorithms.

P. Chen, B. de Keijzer, D. Kempe, and G. Schäfer

Altruism and Its Impact on the Price of Anarchy

ACM Transactions on Economics and Computation, Volume 2, Number 4, 2014

We study the inefficiency of equilibria for congestion games when players are (partially) altruistic. We model altruistic behavior by assuming that player i's perceived cost is a convex combination of (1−αi) times his direct cost and αi times the social cost. Tuning the parameters αi allows smooth interpolation between purely selfish and purely altruistic behavior. Within this framework, we study primarily altruistic extensions of (atomic and nonatomic) congestion games, but also obtain some results on fair cost-sharing games and valid utility games.

We derive (tight) bounds on the price of anarchy of these games for several solution concepts. Thereto, we suitably adapt the smoothness notion introduced by Roughgarden and show that it captures the essential properties to determine the robust price of anarchy of these games. Our bounds show that for atomic congestion games and cost-sharing games, the robust price of anarchy gets worse with increasing altruism, while for valid utility games, it remains constant and is not affected by altruism.

However, the increase in the price of anarchy is not a universal phenomenon: For general nonatomic congestion games with uniform altruism, the price of anarchy improves with increasing altruism. For atomic and nonatomic symmetric singleton congestion games, we derive bounds on the pure price of anarchy that improve as the average level of altruism increases. (For atomic games, we only derive such bounds when cost functions are linear.) Since the bounds are also strictly lower than the robust price of anarchy, these games exhibit natural examples in which pure Nash equilibria are more efficient than more permissive notions of equilibrium.
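
A one-line worked version of the perceived-cost formula above (variable names are ours):

def perceived_cost(direct_cost, social_cost, alpha_i):
    # Convex combination of player i's direct cost and the social cost,
    # weighted by the altruism level alpha_i: alpha_i = 0 is purely selfish,
    # alpha_i = 1 purely altruistic.
    return (1.0 - alpha_i) * direct_cost + alpha_i * social_cost

print(perceived_cost(direct_cost=3.0, social_cost=10.0, alpha_i=0.25))  # 4.75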

B. de Keijzer, T. B. Klos, and Y. Zhang

Finding Optimal Solutions for Voting Game Design Problems

Journal of Artificial Intelligence Research, Volume 50, 2014

In many circumstances where multiple agents need to make a joint decision, voting is used to aggregate the agents' preferences. Each agent's vote carries a weight, and if the sum of the weights of the agents in favor of some outcome is larger than or equal to a given quota, then this outcome is decided upon. The distribution of weights leads to a certain distribution of power. Several "power indices" have been proposed to measure such power. In the so-called inverse problem, we are given a target distribution of power, and are asked to come up with a game in the form of a quota, plus an assignment of weights to the players whose power distribution is as close as possible to the target distribution (according to some specified distance measure).

Here we study solution approaches for the larger class of voting game design (VGD) problems, one of which is the inverse problem. In the general VGD problem, the goal is to find a voting game (with a given number of players) that optimizes some function over these games. In the inverse problem, for example, we look for a weighted voting game that minimizes the distance between the distribution of power among the players and a given target distribution of power (according to a given distance measure). Our goal is to find algorithms that solve voting game design problems exactly, and we approach this goal by enumerating all games in the class of games of interest.

We first present a doubly exponential algorithm for enumerating the set of simple games. We then improve on this algorithm for the class of weighted voting games and obtain a quadratic exponential (i.e., 2^O(n^2)) algorithm for enumerating them. We show that this improved algorithm runs in output-polynomial time, making it the fastest possible enumeration algorithm up to a polynomial factor. Finally, we propose an exact anytime-algorithm that runs in exponential time for the power index weighted voting game design problem (the "inverse problem"). We implement this algorithm to find a weighted voting game with a normalized Banzhaf power distribution closest to a target power index, and perform experiments to obtain some insights about the set of weighted voting games. We remark that our algorithm is applicable to optimizing any exponential-time computable function; the distance of the normalized Banzhaf index to a target power index is merely taken as an example.
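
A brute-force sketch of the basic objects involved (weights, quota, and the normalized Banzhaf index), enumerating all coalitions; this is exponential in the number of players and purely illustrative, not the enumeration algorithms of the paper.

from itertools import combinations

def normalized_banzhaf(weights, quota):
    # Normalized Banzhaf power index of the weighted voting game [quota; weights].
    n = len(weights)
    swings = [0] * n
    for r in range(n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            for i in coalition:
                # i is a swing player if the coalition wins with i but not without i
                if total >= quota and total - weights[i] < quota:
                    swings[i] += 1
    s = sum(swings)
    return [c / s for c in swings]

# Example: [quota 5; weights 4, 2, 1, 1]
print(normalized_banzhaf([4, 2, 1, 1], quota=5))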

R. Colini-Baldeschi, S. Leonardi, P. Sankowski, and Q. Zhang

Revenue Maximizing Envy-Free Fixed-Price Auctions with Budgets

Proc. 10th Conference on Web and Internet Economics (WINE 2014), 2014

Traditional incentive-compatible auctions [6,16] for selling multiple goods to unconstrained and budgeted bidders can discriminate between bidders by selling identical goods at different prices. For this reason, Feldman et al. [7] dropped incentive compatibility and turned the attention to revenue maximizing envy-free item-pricing allocations for budgeted bidders. Envy-free allocations were suggested by classical papers [9,15]. The key property of such allocations is that no one envies the allocation and the price charged to anyone else. In this paper we consider this classical notion of envy-freeness and study fixed-price mechanisms which use nondiscriminatory uniform prices for all goods. Feldman et al. [7] gave an item-pricing mechanism that obtains 1/2 of the revenue obtained from any envy-free fixed-price mechanism for identical goods. We improve over this result by presenting an FPTAS for the problem that returns a (1−ε)-approximation of the revenue obtained by any envy-free fixed-price mechanism for any ε>0 and runs in polynomial time in the number of bidders n and 1/ε, even for exponential supply of goods m. Next, we consider the case of budgeted bidders with matching-type preferences on the set of goods, i.e., the valuation of each bidder for each item is either v_i or 0. In this more general case, we prove that it is impossible to approximate the optimum revenue within O(min(n,m)^(1/2−ε)) for any ε>0 unless P=NP. On the positive side, we are able to extend the FPTAS for identical goods to budgeted bidders in the case of a constant number of different types of goods. Our FPTAS also gives a constant approximation with respect to the general envy-free auction.

H. Aziz, and B. de Keijzer

Shapley Meets Shapley

Proc. 31st International Symposium on Theoretical Aspects of Computer Science (STACS 2014), 2014

This paper concerns the analysis of the Shapley value in matching games. Matching games constitute a fundamental class of cooperative games which help understand and model auctions and assignments. In a matching game, the value of a coalition of vertices is the weight of the maximum size matching in the subgraph induced by the coalition. The Shapley value is one of the most important solution concepts in cooperative game theory. After establishing some general insights, we show that the Shapley value of matching games can be computed in polynomial time for some special cases: graphs with maximum degree two, and graphs that have a small modular decomposition into cliques or cocliques (complete k-partite graphs are a notable special case of this). The latter result extends to various other well-known classes of graph-based cooperative games. We continue by showing that computing the Shapley value of unweighted matching games is #P-complete in general. Finally, a fully polynomial-time randomized approximation scheme (FPRAS) is presented. This FPRAS can be considered the best positive result conceivable, in view of the #P-completeness result.
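
A brute-force sketch of a matching game and its Shapley value on a toy instance (factorial/exponential time, small graphs only; not the algorithms from the paper): the value of a coalition is the weight of a maximum matching in the induced subgraph, and a vertex's Shapley value averages its marginal contribution over all arrival orders.

from itertools import permutations, combinations

def matching_value(vertices, edges):
    # v(S) = weight of a maximum matching in the subgraph induced by S.
    best = 0
    edge_list = [(u, w, c) for (u, w), c in edges.items() if u in vertices and w in vertices]
    for r in range(len(edge_list) + 1):
        for subset in combinations(edge_list, r):
            used = [x for (u, w, _) in subset for x in (u, w)]
            if len(used) == len(set(used)):          # edges form a matching
                best = max(best, sum(c for (_, _, c) in subset))
    return best

def shapley_matching_game(nodes, edges):
    shapley = {v: 0.0 for v in nodes}
    perms = list(permutations(nodes))
    for order in perms:
        coalition = set()
        for v in order:
            before = matching_value(coalition, edges)
            coalition.add(v)
            shapley[v] += matching_value(coalition, edges) - before
    return {v: s / len(perms) for v, s in shapley.items()}

# Example: a path a-b-c with edge weights 2 and 1; Shapley values 2/3, 7/6, 1/6
print(shapley_matching_game(["a", "b", "c"], {("a", "b"): 2.0, ("b", "c"): 1.0}))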

D. Ferraioli, L. Gourvès, and J. Monnot

On Regular and Approximately Fair Allocations of Indivisible Goods

Proc. 2014 International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2014), 2014

An active stream of research is devoted to the design of polynomial approximation algorithms for the fair allocation of indivisible goods. Central to this field is the MaxMin Allocation problem, for which there is a significant gap between known approximation and inapproximability results. Closing this gap is a stimulating challenge.

To this end, we consider a regular version of MaxMin Allocation where each agent must receive exactly k goods, for a given integer k. We call this problem k-division. The analysis of this problem allows us to highlight two interesting features of the classical MaxMin Allocation problem. First, we show a close connection of the problem with matroid theory. This connection provides us with an exact algorithm for a special case of k-division and a 1/k-approximation algorithm for general inputs. Moreover, we show that the difficulty of MaxMin Allocation may be captured by an apparently simpler problem, namely the k-division problem in which an agent's utility for a single good can only take one value out of three.


2013

A. Anagnostopoulos, L. Becchetti, B. de Keijzer, and G. Schäfer

Inefficiency of Games with Social Context [pdf]

Proc. 6th International Symposium on Algorithmic Game Theory (SAGT 2013), 2013

The study of other-regarding player behavior such as altruism and spite in games has recently received quite some attention in the algorithmic game theory literature. Already for very simple models, it has been shown that altruistic behavior can actually be harmful for society in the sense that the price of anarchy may increase as the players become more altruistic. In this paper, we study the severity of this phenomenon for more realistic settings in which there is a complex underlying social structure, causing the players to direct their altruistic and spiteful behavior in a refined player-specific sense (depending, for example, on friendships that exist among the players). Our findings show that the increase in the price of anarchy is modest for congestion games and minsum scheduling games, whereas it is drastic for generalized second price auctions.

S. Bhattacharya, E. Koutsoupias, J. Kulkarni, S. Leonardi, T. Roughgarden, and X. Xu

Near-Optimal Multi-Unit Auctions with Ordered Bidders

Proc. 14th ACM Conference on Electronic Commerce (EC 2013), 2013

We construct prior-free auctions with constant-factor approximation guarantees with ordered bidders, in both unlimited and limited supply settings. We compare the expected revenue of our auctions on a bid vector to the monotone price benchmark, the maximum revenue that can be obtained from a bid vector using supply-respecting prices that are nonincreasing in the bidder ordering and bounded above by the second-highest bid. As a consequence, our auctions are simultaneously near-optimal in a wide range of Bayesian multi-unit environments.


2012

A. Kesselman, and S. Leonardi

Game-Theoretic Analysis of Internet Switching with Selfish Users

Theoretical Computer Science, Volume 452, 2012

We consider the problem of Internet switching, where traffic is generated by selfish users. We study a packetized (TCP-like) traffic model, which is more accurate than the widely used fluid model. We assume that routers have First-In-First-Out (FIFO) buffers of bounded capacity managed by the drop-tail policy. The utility of each user depends on its goodput and the congestion level. Since selfish users try to maximize their own utility disregarding the system objectives, we study Nash equilibria that correspond to a steady state of the system. We quantify the degradation in network performance, called the price of anarchy, that results from such selfish behavior. We show that for a single bottleneck buffer, the price of anarchy is proportional to the number of users. Then we propose a simple modification of the Random Early Detection (RED) drop policy, which reduces the price of anarchy to a constant. We demonstrate that a Nash equilibrium can be reached if all users deploy TCP Vegas as their transport protocol under the drop-tail policy. We also consider some natural extensions of our model including the case of multiple Quality of Service (QoS) requirements, routing on parallel links and general networks with multiple bottlenecks.

S. Leonardi, and T. Roughgarden

Prior-Free Auctions with Ordered Bidders

Proc. 44th ACM Symposium on Theory of Computing (STOC 2012), 2012

Prior-free auctions are robust auctions that assume no distribution over bidders' valuations and provide worst-case (input-by-input) approximation guarantees. In contrast to previous work on this topic, we pursue good prior-free auctions with non-identical bidders.

Prior-free auctions can approximate meaningful benchmarks for non-identical bidders only when "sufficient qualitative information" about the bidder asymmetry is publicly known. We consider digital goods auctions where there is a total ordering of the bidders that is known to the seller, where earlier bidders are in some sense thought to have higher valuations. We use the framework of Hartline and Roughgarden (STOC '08) to define an appropriate revenue benchmark: the maximum revenue that can be obtained from a bid vector using prices that are nonincreasing in the bidder ordering and bounded above by the second-highest bid. This monotone-price benchmark is always as large as the well-known fixed-price benchmark F^(2), so designing prior-free auctions with good approximation guarantees is only harder. By design, an auction that approximates the monotone-price benchmark satisfies a very strong guarantee: it is, in particular, simultaneously near-optimal for essentially every Bayesian environment in which bidders' valuation distributions have nonincreasing monopoly prices, or in which the distribution of each bidder stochastically dominates that of the next. Of course, even if there is no distribution over bidders' valuations, such an auction still provides a quantifiable input-by-input performance guarantee.

In this paper, we design a simple prior-free auction for digital goods with ordered bidders, the Random Price Restriction (RPR) auction. We prove that its expected revenue on every bid profile b is Ω(M(b)/log* n), where M denotes the monotone-price benchmark and log* n denotes the number of times that the log_2 operator can be applied to n before the result drops below a fixed constant.
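
A small dynamic-programming sketch of the monotone-price benchmark M(b) for digital goods with unlimited supply (our own illustrative code, not from the paper): choose one price per bidder, nonincreasing in the given order and at most the second-highest bid, and collect p_i from every bidder i with b_i >= p_i; prices can be restricted to bid values without loss of generality.

def monotone_price_benchmark(bids):
    # Maximize sum_i p_i * [b_i >= p_i] over prices p_1 >= p_2 >= ... >= p_n,
    # each at most the second-highest bid; w.l.o.g. each p_i is a bid value
    # (or 0). Dynamic program over bidders in the given order.
    second_highest = sorted(bids)[-2]
    candidates = sorted({0.0} | {b for b in bids if b <= second_highest})
    # best[j] = max revenue so far when the last offered price is candidates[j]
    best = [0.0] * len(candidates)
    for b in bids:
        # suffix[j] = max revenue so far with last price at least candidates[j]
        suffix, running = [0.0] * len(candidates), float("-inf")
        for j in range(len(candidates) - 1, -1, -1):
            running = max(running, best[j])
            suffix[j] = running
        best = [suffix[j] + (candidates[j] if b >= candidates[j] else 0.0)
                for j in range(len(candidates))]
    return max(best)

print(monotone_price_benchmark([10.0, 3.0, 7.0, 2.0]))  # 16.0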

M. Feldman, A. Fiat, S. Leonardi, and P. Sankowski

Revenue Maximizing Envy-Free Multi-Unit Auctions with Budgets

Proc. 13th ACM Conference on Electronic Commerce (EC 2012), 2012

We study envy-free (EF) mechanisms for multi-unit auctions with budgeted agents that approximately maximize revenue. In an EF auction, prices are set so that every bidder receives a bundle that maximizes her utility amongst all bundles. We show that the problem of revenue-maximizing EF auctions is NP-hard, even for the case of identical items and additive valuations (up to the budget). The main result of our paper is a novel EF auction that runs in polynomial time and provides a 1/2-approximation with respect to the revenue-maximizing EF auction. A slight variant of our mechanism produces an allocation and pricing that is more restrictive (so-called item pricing) and gives a 1/2-approximation to the optimal revenue within this more restrictive class.

R. Colini-Baldeschi, M. Henzinger, S. Leonardi, and M. Starnberger

On Multiple Keyword Sponsored Search Auctions with Budgets

Proc. 39th International Colloquium on Automata, Languages and Programming (ICALP 2012), 2012

We study multiple keyword sponsored search auctions with budgets. Each keyword has multiple ad slots with a click-through rate. The bidders have additive valuations, which are linear in the click-through rates, and budgets, which restrict their overall payments. Additionally, the number of slots per keyword assigned to a bidder is bounded.

We show the following results: (1) We give the first mechanism for multiple keywords, where click-through rates differ among slots. Our mechanism is incentive compatible in expectation, individually rational in expectation, and Pareto optimal. (2) We study the combinatorial setting, where each bidder is only interested in a subset of the keywords. We give an incentive compatible, individually rational, Pareto optimal, and deterministic mechanism for identical click-through rates. (3) We give an impossibility result for incentive compatible, individually rational, Pareto optimal, and deterministic mechanisms for bidders with diminishing marginal valuations.


2011

A. Fiat, S. Leonardi, J. Saia, and P. Sankowski

Single Valued Combinatorial Auctions with Budgets

Proc. 12th ACM Conference on Electronic Commerce (EC 2011), 2011

We consider budget constrained combinatorial auctions where each bidder has a private value for each of the items in some subset of the items and an overall budget constraint. Such auctions capture adword auctions, where advertisers offer a bid for those adwords that (hopefully) target their intended audience, and advertisers also have budgets. It is known that even if all items are identical and all budgets are public it is not possible to be truthful and efficient. Our main result is a novel auction that runs in polynomial time, is incentive compatible, and ensures Pareto-optimality. The auction is incentive compatible with respect to the private valuations, whereas the budgets and the sets of interest are assumed to be public knowledge. This extends the result of Dobzinski, Lavi and Nisan (FOCS 2008) for auctions of multiple identical items with budgets to single-valued combinatorial auctions and addresses one of the basic challenges of auctioning web ads (see Nisan et al., 2009, Google's auction for TV ads).


2010

F. Grandoni, P. Krysta, S. Leonardi, and C. Ventre

Utilitarian Mechanism Design for Multi-Objective Optimization

Proc. 21st Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2010), 2010

In a classic optimization problem the complete input data is known to the algorithm. This assumption may not be true anymore in optimization problems motivated by the Internet where part of the input data is private knowledge of independent selfish agents. The goal of algorithmic mechanism design is to provide (in polynomial time) a solution to the optimization problem and a set of incentives for the agents such that disclosing the input data is a dominant strategy for the agents. In the case of NP-hard problems, the solution computed should also be a good approximation of the optimum.

In this paper we focus on mechanism design for multi-objective optimization problems, where we are given the main objective function, and a set of secondary objectives which are modeled via budget constraints. Multi-objective optimization is a natural setting for mechanism design as many economical choices ask for a compromise between different, partially conflicting, goals. Our main contribution is showing that two of the main tools for the design of approximation algorithms for multi-objective optimization problems, namely approximate Pareto curves and Lagrangian relaxation, can lead to truthful approximation schemes.

By exploiting the method of approximate Pareto curves, we devise truthful FPTASs for multi-objective optimization problems whose exact version admits a pseudo-polynomial-time algorithm, as for instance the multi-budgeted versions of minimum spanning tree, shortest path, maximum (perfect) matching, and matroid intersection. Our technique applies also to multi-dimensional knapsack and multi-unit combinatorial auctions. Our FPTASs compute a (1+ε)-approximate solution violating each budget constraint by a factor (1+ε). For a relevant sub-class of the mentioned problems we also present a PTAS (not violating any constraint), which combines the approach above with a novel monotone way to guess the heaviest elements in the optimum solution.

Finally we present a universally truthful Las Vegas PTAS for minimum spanning tree with a single budget constraint. This result is based on the Lagrangian relaxation method, in combination with our monotone guessing step and a random perturbation step (ensuring low expected running time in a way similar to the smoothed analysis of algorithms). All the mentioned results match the best known approximation ratios, which however are obtained by non-truthful algorithms.


2008

J. Könemann, S. Leonardi, G. Schäfer, and S. Zwam

A Group-Strategyproof Cost Sharing Mechanism for the Steiner Forest Game

SIAM Journal on Computing, Volume 37, Number 5, 2008

We consider a game-theoretical variant of the Steiner forest problem in which each player j, out of a set of k players, strives to connect his terminal pair (s_j, t_j) of vertices in an undirected, edge-weighted graph G. In this paper we show that a natural adaptation of the primal-dual Steiner forest algorithm of Agrawal, Klein, and Ravi [SIAM Journal on Computing, 24(3):445-456, 1995] yields a 2-budget balanced and cross-monotonic cost sharing method for this game.

We also present a negative result, arguing that no cross-monotonic cost sharing method can achieve a budget balance factor of less than 2 for the Steiner tree game. This shows that our result is tight.

Our algorithm gives rise to a new linear programming relaxation for the Steiner forest problem which we term the lifted-cut relaxation. We show that this new relaxation is stronger than the standard undirected cut relaxation for the Steiner forest problem.


2007

S. Leonardi, and P. Sankowski

Network Formation Games with Local Coalitions

Proc. 26th Annual ACM Symposium on Principles of Distributed Computing (PODC 2007), 2007

The quality of Nash equilibria in network formation games has recently been analyzed in the case of uncoordinated players. In this paper we study how the price of anarchy of network formation games with Shapley cost allocation is affected by allowing locally coordinated coalitions of players. In a distributed setting not all users can communicate and form coalitions; at the very least, they have to know that the others exist. Here, we assume that the users can form a coalition when they share a resource, i.e., in our case a group of users that share an edge can form a coalition. We show that this assumption is strong enough to decrease the price of anarchy from Θ(k) to Θ(log k) in the undirected single-terminal case, where every vertex is associated with a player and k is the number of players. In the directed or multi-terminal case, in contrast, local coordination does not necessarily lead to a better price of anarchy. We additionally show that in the directed case the price of stability increases from Θ(log k) to Θ(k).
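
A minimal sketch of the Shapley (fair-share) cost allocation on a fixed profile of chosen paths (the graph and paths are made-up illustrative data): each edge's cost is split equally among the players whose path uses it.

# Shapley cost allocation in a network formation game: each edge's cost is
# shared equally by the players whose chosen path uses that edge.
edge_cost = {("s", "a"): 3.0, ("a", "t"): 3.0, ("s", "t"): 4.0}
paths = {"p1": [("s", "a"), ("a", "t")], "p2": [("s", "a"), ("a", "t")], "p3": [("s", "t")]}

users = {e: [p for p, path in paths.items() if e in path] for e in edge_cost}
share = {p: sum(edge_cost[e] / len(users[e]) for e in path) for p, path in paths.items()}
print(share)   # p1 and p2 each pay 3.0, p3 pays 4.0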