We consider a class of fixed-charge transportation problems over a
graph G. We show that this problem is strongly NP-hard, but can be solved in pseudo-polynomial time by dynamic programming if G is a tree. We also show that the linear programming formulation associated with the dynamic program can be obtained from extended formulations of single-node flow polytopes. Motivated by these results, we present a unary expansion-based formulation for general graphs that is computationally advantageous compared to a standard formulation, even though its LP relaxation is not stronger.
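As a rough illustration of the unary idea (our notation; the talk's formulation may differ): an integer flow variable x_e ∈ {0, 1, ..., U_e} is replaced by binary variables via

\[ x_e = \sum_{u=0}^{U_e} u \, z_{e,u}, \qquad \sum_{u=0}^{U_e} z_{e,u} = 1, \qquad z_{e,u} \in \{0,1\}, \]

so that fixed charges and flow costs become linear in the z variables. This is joint work with Mathieu Van Vyve.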
We introduce a new framework of Airport and Railway Problems, which combines capacitated facility location with network design. In this framework we are given a graph with weights on the vertices and on the edges, together with a parameter k. The vertices of the graph represent cities, and the weights denote, respectively, the costs of opening airports in the cities and of building railways that connect pairs of cities. The parameter k can be thought of as the capacity of an airport. The goal is to construct a minimum cost network of airports and railways connecting the cities, where each connected component of the network spans at most k vertices and contains an open airport, and where the network satisfies additional requirements specific to the problem in the framework.
We consider two problems in this framework. In the ARF problem there are no additional requirements for the network; this problem is related to capacitated facility location. In the ARP problem, we require each component to be a path with airports at both endpoints; ARP is a relaxation of the capacitated vehicle routing problem (CVRP).
We consider the problems in the two-dimensional Euclidean setting. We show that both ARF and ARP are NP-hard, even for uniform vertex weights (i.e. when the cost of building an airport is the same for all cities). On the positive side, we provide polynomial time approximation schemes for ARF and ARP when vertex weights are uniform. We also investigate ARF and ARP for k=∞. In this setting we present an exact polynomial time algorithm for ARF with general vertex costs, which also works for general edge costs. In contrast to ARF, ARP remains NP-hard when k=∞, and we present a polynomial time approximation scheme for general vertex weights.
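In symbols (a sketch in our own notation; the talk's formal definitions may differ): given G = (V, E) with airport costs a_v and railway costs w_e, one seeks

\[ \min_{A \subseteq V,\ F \subseteq E} \; \sum_{v \in A} a_v + \sum_{e \in F} w_e \]

over sets A of opened airports and F of built railways such that every connected component of (V, F) spans at most k vertices and contains a vertex of A.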
This is joint work with Anna Adamaszek and Tobias Mömke.
We study combinatorial games arising from removing arcs from a network in order to disconnect two distinguished nodes s and t. First we study a two-person zero-sum game where one player, the attacker, chooses at regular intervals a set of arcs that intercepts every path from s to t, while a second player, the inspector, chooses at regular intervals an arc to inspect, trying to find the attacker.
We give polynomial algorithms to find optimal strategies for both players.
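In matrix-game terms (a sketch, with notation of our own): the inspector mixes over arcs, the attacker mixes over cuts, and the value of the game is

\[ v = \max_{y \in \Delta(A)} \; \min_{C \in \mathcal{C}} \; \sum_{a \in C} y_a, \]

where \Delta(A) is the set of probability distributions over the arcs, \mathcal{C} is the family of arc sets intercepting every s-t path, and the payoff is the probability that the inspected arc lies in the attacker's set. Although this matrix game has exponentially many pure strategies for the attacker, the combinatorial structure of cuts is what makes polynomial algorithms possible.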
Then we study a cooperative game where a coalition corresponds to a set of arcs, and the value of a coalition S is the maximum number of disjoint s-t cuts included in S; this is the number of independent ways in which a coalition can disconnect s and t. We give polynomial algorithms for testing membership in the core and for computing the nucleolus. Time permitting, we also study disruption games, where a coalition wins if it disconnects s and t; here we give a polynomial combinatorial algorithm for finding the least-core and for computing the nucleolus of the least-core.
Joint work with Francisco Barahona.
We formulate an affine invariant implementation of the algorithm in
Nesterov (1983). We show that the complexity bound is then
proportional to an affine invariant regularity constant defined with
respect to the Minkowski gauge of the feasible set. We also detail
matching lower bounds when the feasible set is an
ℓ_p ball. In this setting, our bounds on iteration complexity for the algorithm in Nesterov (1983) are thus optimal in terms of target precision, smoothness and problem dimension.
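For reference, the Minkowski gauge of a set Q containing 0 in its interior is

\[ \gamma_Q(x) = \inf \{ \lambda > 0 : x \in \lambda Q \}, \]

so measuring smoothness with respect to the gauge of the feasible set, rather than with respect to a fixed norm, is what renders the complexity bound affine invariant.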
Based on joint work with Cristóbal Guzmán and Martin Jaggi.
In this talk, we study revenue-maximizing mechanism design for
sequencing jobs on a single processor. We consider a model where a
set of agents each has a single job with a cost for waiting (their
weight) and a processing time. One or both of the parameters can be
private to the agent, in which case we assume public knowledge of
priors for that data. We consider the following problem. Given the
priors for the jobs' private data, find a sequencing rule and
payment scheme that are incentive compatible with respect to the
Bayes-Nash equilibrium and minimize the total expected payment made
to the agents.
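Schematically (in our own notation): with the agents' private data θ drawn from the given priors, the goal is to choose a sequencing rule σ and a payment scheme π so as to

\[ \min_{(\sigma,\pi)} \; \mathbb{E}_\theta \Big[ \sum_i \pi_i(\theta) \Big] \]

subject to truthful reporting being a Bayes-Nash equilibrium of the induced game.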
We give an overview of three cases. In the single-dimensional case,
when only weights are private information, an optimal mechanism
exists that can be described with closed formulae. When the
processing times are also private, but the agents are only allowed
to report processing times larger than their true processing time
(1.5-D case), a linear program of exponential size can be
compressed into an equivalent one of polynomial size. For the 2-D case, where the agents
are allowed to report smaller than their true processing time, we
show that the optimal mechanism can again be described with closed
formulae, with the same flavor as the single-dimensional case. This
is joint work with Marc Uetz.
The assignment game (Shapley and Shubik, 1972) is a model for a
two-sided market where there is an exchange of indivisible
goods for money and buyers or sellers demand or supply exactly
one unit of the good. The potential profit obtained by a buyer
and a seller if they trade can be represented in a matrix, the
assignment matrix.
We show that the family of assignment matrices which give rise
to the same nucleolus form a compact join-semilattice with one
maximal element, which is always a valuation (see Topkis (1998),
p. 43). We give an explicit form of this valuation matrix.
The above family is in general not a convex set, but it is path-connected,
and we construct minimal elements of this family. We analyze the
conditions to ensure that a given vector is the nucleolus of some
assignment game. We study a specific rule to compute the nucleolus
in some cases. This is joint work with Carles Rafels and Neus Ybern.
Motivated by the scarcity of accurate payoff feedback in many applications of game theory, we examine a broad class of learning dynamics for games where players adjust their choices based on past observations that are subject to noise and random payoff disturbances. In the unilateral case (where a single player is trying to adapt to an arbitrarily changing environment), we show that the stochastic dynamics under study lead to no regret almost surely, irrespective of the noise level in the player's observations. In the multi-player case, we find that dominated strategies become extinct and we establish several stability and convergence results for strict equilibria — again, valid for random disturbances of arbitrarily high magnitude. Finally, we also provide an averaging principle for 2-player games and we show that the empirical distribution of play converges to Nash equilibrium in zero-sum games for any noise level.
We introduce the dependent splitting game, a zero-sum stochastic game
in which the players jointly control a martingale in some
finite-dimensional
simplex, and consider a general evaluation of the stage payoffs.
The existence of the value is established for any fixed evaluation. We then proceed to both an asymptotic and a uniform approach. First, we prove that, as the weights of the stages tend to 0, the value functions converge to the unique solution of the Mertens-Zamir system of equations. Second, we prove the existence of the uniform value for the infinitely repeated splitting game and exhibit a pair of 0-optimal stationary strategies for which, in addition, the stage payoff and the state remain constant after at most one stage. As an application of
our previous results, we establish the convergence of the values for
repeated games with incomplete information on both sides, in the
dependent case, as the weights of each stage tend to zero.
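For reference, the Mertens-Zamir system mentioned above characterizes a function v of the pair of beliefs (p, q) by

\[ v = \mathrm{Cav}_p \min(u, v) = \mathrm{Vex}_q \max(u, v), \]

where u is the value of the non-revealing game, and \mathrm{Cav}_p and \mathrm{Vex}_q denote concavification in p and convexification in q, respectively (standard notation from repeated games with incomplete information).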
Stackelberg games that arise in applications can lead to large
instances, which makes it important to develop efficient
solution methods. In this work we present a polyhedral study
of different integer programming formulations of Stackelberg
games. We present a new mixed integer linear formulation for
multiple adversary Stackelberg games and compare it to other
formulations in the literature. In addition we adapt these
formulations for Stackelberg games that arise in a security
setting. Our theoretical results compare the strength of the linear relaxations of these Stackelberg game formulations. Our computational results show that while some formulations give a smaller gap, the additional computational time required to solve their LP relaxations makes them less efficient for solving problems of interest in practice.
Joint work with Carlos Casorran, Bernard Fortz and Martine Labbé.
In this joint work with Francis Bach, we consider online convex optimization with noisy zeroth-order information, that is, noisy function evaluations at any desired point. We focus on problems with
high degrees of smoothness, such as online logistic regression. We
show that as opposed to gradient-based algorithms, high-order
smoothness may be used to improve estimation rates, with a precise
dependence on the degree of smoothness and the dimension. In
particular, we show that for infinitely differentiable functions, we
recover the same dependence on sample size as gradient-based
algorithms, with an extra dimension-dependent factor. This is done
for convex and strongly-convex functions, with finite horizon and
anytime algorithms.
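As a point of reference (a standard construction from the literature on bandit convex optimization, not necessarily the estimator used in this work): a single noisy evaluation yields the gradient estimate

\[ g_t = \frac{d}{\delta} \, f(x_t + \delta u_t) \, u_t, \qquad u_t \ \text{uniform on the unit sphere of } \mathbb{R}^d, \]

which is unbiased for the gradient of the smoothed function x ↦ E[f(x + δv)], with v uniform in the unit ball; exploiting higher-order smoothness amounts to building more refined estimators of this kind with smaller bias.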
A set of intervals is independent when the intervals are pairwise
disjoint. In the interval selection problem we are given a set I of intervals and we want to find
an independent subset of intervals of largest cardinality. Let a(I)
denote the cardinality of an optimal solution. We discuss the
estimation of a(I) in the streaming model, where we
only have one-time, sequential access to the input intervals, the
endpoints of the intervals lie in {1, ..., n}, and the amount of memory is
constrained.
For intervals of arbitrary lengths, we provide an algorithm in the data
stream model that computes an estimate a' of a(I)
that, with probability at least 2/3, satisfies (1/2)(1-ε) a(I)
≤ a' ≤ a(I).
For same-length intervals, we provide another algorithm in the data
stream model that computes an estimate a' of a(I)
that, with probability at least 2/3, satisfies (2/3)(1-ε) a(I)
≤ a' ≤ a(I).
The space used by our algorithms is bounded by a polynomial in 1/ε
and log n. We also show that
no better estimates can be achieved using o(n)
bits of storage.
We also develop new approximation algorithms for the interval selection
problem itself, where a feasible solution must be reported, that use O(a(I))
space. Our algorithms for the interval selection problem match the
optimal results by Emek, Halldórsson and Rosén [Space-Constrained
Interval Selection, ICALP 2012], but are much simpler.
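For contrast with the streaming setting, recall that offline the problem is easy: scanning intervals by right endpoint and keeping each interval that does not overlap the last kept one computes a(I) exactly. A minimal sketch of this standard textbook algorithm (not of the streaming algorithms above):

```python
def max_independent_intervals(intervals):
    """Classical offline greedy for interval selection: sort by right
    endpoint and keep every interval starting after the last kept one
    ends; this returns a maximum independent set of intervals."""
    selected = []
    last_end = float("-inf")
    for left, right in sorted(intervals, key=lambda iv: iv[1]):
        if left > last_end:  # disjoint from everything selected so far
            selected.append((left, right))
            last_end = right
    return selected

# Example with a(I) = 2: the greedy keeps (1, 3) and (4, 6).
print(max_independent_intervals([(1, 3), (2, 5), (4, 6)]))
```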
Based on joint work with Sergio Cabello.
Let f : H → ℝ ∪ {+∞} be a proper lower-semicontinuous convex function defined on a Hilbert
space H (think of ℝ^N), and let
(x_k) be a sequence in H generated by
means of a "typical" first-order method intended to minimize
f. If argmin(f) is nonempty then
(x_k) will converge weakly to a minimizer of
f, as
k → ∞, with a worst-case theoretical convergence rate of
f(x_k) - min(f) = O(1/k).
In the 1980s Y. Nesterov came up with a revolutionary – yet remarkably simple! – idea to modify the computation of the iterates, at essentially the same computational cost, in order to improve this theoretical convergence rate to O(1/k^2). For three decades researchers have unsuccessfully tried to prove the (weak) convergence of the resulting sequence. Despite this drawback, Nesterov's scheme is now common practice, especially in image processing and statistics, and systematically converges faster than its unaccelerated counterpart.
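In its textbook form (a sketch for smooth unconstrained minimization with gradient Lipschitz constant L; the many variants differ in details), the accelerated iteration can be written as:

```python
import numpy as np

def nesterov_accelerated_gradient(grad_f, x0, L, num_iters):
    """Textbook sketch of Nesterov's accelerated gradient method:
    a gradient step taken at an extrapolated point y, which improves
    the worst-case rate from O(1/k) to O(1/k^2) for L-smooth convex f."""
    x_prev = x0.copy()
    x = x0.copy()
    for k in range(1, num_iters + 1):
        # extrapolation ("momentum") step
        y = x + (k - 1) / (k + 2) * (x - x_prev)
        # standard gradient step at the extrapolated point
        x_prev, x = x, y - grad_f(y) / L
    return x

# Example: minimize f(x) = ||x||^2 / 2, so grad_f(x) = x and L = 1.
print(nesterov_accelerated_gradient(lambda x: x, np.ones(3), L=1.0, num_iters=100))
```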
In this talk we revisit Nesterov's acceleration scheme by interpreting it as a time discretization of a second-order differential equation. It turns out that this analogy brings a deeper insight into the procedure. In particular, it is possible to prove that an "infinitesimal" variant of the accelerated method yields sequences which are not only convergent, but exhibit a convergence rate of f(x_k) - min(f) = o(1/k^2).
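Concretely, the continuous-time model studied in this line of work (in the form popularized by Su, Boyd and Candès; stated here as background, not as the talk's exact setup) is the differential equation

\[ \ddot{x}(t) + \frac{\alpha}{t} \, \dot{x}(t) + \nabla f(x(t)) = 0, \]

whose time discretization recovers Nesterov-type iterations; the coefficient α = 3 corresponds to Nesterov's original scheme, while taking α > 3 yields the variant with convergent trajectories and the improved rate.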
The (weak) convergence of the original method of Nesterov remains a puzzling open question.
We consider a game where n sellers can choose one out of a finite number of locations in order to attract the highest number of customers. The model assumes that customers are arbitrarily distributed and shop at the closest location. If several sellers choose the same location, then they fairly share the customers who shop at that location. We prove that, when the number of sellers is high enough, the game admits a pure equilibrium. In this equilibrium the number of sellers who choose a location is almost proportional to the number of consumers who shop there. We consider some variations of the game where either the number of sellers is random, or the sellers are of different types. We relate this finite game to the classical Hotelling game, where the action set of the sellers is infinite.
The above-described game is not a potential game, but it has some features of a potential game when the number of sellers is large. We examine the relation between potential and location games.
This talk is based on joint papers with Matías Núñez and Gaëtan Fournier.
Motivated by models of congestion in networks and, more generally, aggregative games, we consider a framework where different kinds of participants interact: populations of nonatomic players, atomic players with splittable action sets, and atomic players with unsplittable action sets. We define and characterize the corresponding equilibria through variational inequalities. We then introduce and study the associated potential games and evolutionary dynamics.
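To recall the standard formalism (our notation): a profile x* in the feasible set K is an equilibrium exactly when it solves the variational inequality

\[ \langle F(x^*), x - x^* \rangle \ge 0 \quad \text{for all } x \in K, \]

where F stacks the negative payoff gradients of the participants; potential games correspond to the case where F is itself the gradient of a single potential function, so equilibria are its critical points over K.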
The maximum cut (Max-Cut) problem consists of finding a
bipartition of a weighted graph that maximizes the total weight of the
edges crossing it. It is easy to find a solution for Max-Cut whose weight is
at least half the optimum: a random partition cuts in expectation half
of the total weight of the graph. No approximation better than 1/2 was
known for this problem until Goemans and Williamson devised an
algorithm using semidefinite programming (SDP) to achieve a 0.87856
approximation.
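To make the 1/2 baseline concrete, here is a minimal sketch of the random partition argument: each vertex joins a side independently with probability 1/2, so each edge is cut with probability 1/2 and the expected cut weight is half the total weight.

```python
import random

def random_cut_weight(edges):
    """Random bipartition baseline for Max-Cut: each edge (u, v, w)
    is cut with probability 1/2, so E[cut] = total_weight / 2."""
    side = {}
    cut = 0.0
    for u, v, w in edges:
        for node in (u, v):
            if node not in side:
                side[node] = random.random() < 0.5
        if side[u] != side[v]:
            cut += w
    return cut

edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 1.5)]
print(random_cut_weight(edges))  # about 2.25 on average (half of 4.5)
```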
Although SDP is solvable in polynomial time, it is computationally expensive, so it is still interesting to develop algorithms that do not use SDP. In 2012 Trevisan proposed an algorithm based on eigenvector computations that achieves an approximation factor better than 1/2. In this talk I will present an improved analysis of this algorithm, showing that eigenvector computations can be used to approximate Max-Cut up to a factor of 0.6142.
Optimization models have long been used to support operational planning in several areas. Typically, we model decisions covering different time horizons: long-term strategic decisions, mid-term tactical decisions, and short-range operational ones. However, there are often inconsistencies between these models. For instance, tactical decisions might not be optimal for short-term operations, due to various sources of uncertainty. We explore the question of how to achieve consistency (even partially) and address this problem using various stochastic and robust optimization models. We show some results on how these methodologies can increase consistency, with an application to a planning model in the forest industry. We also provide some preliminary results for a problem in the management of health systems.
Makespan scheduling on identical machines is one of the most basic and fundamental packing problems studied in the discrete optimization literature. It asks for an assignment of n jobs to a set of m identical machines. The objective is to minimize the maximum load of the machines, that is, to minimize the maximum sum of processing times of jobs assigned to a machine. The problem is (strongly) NP-hard, and thus we do not expect the existence of an efficient algorithm to solve it exactly. Arguably, the best we can expect from an efficient algorithm is a guarantee on the multiplicative error of (1+ε), for any ε>0. However, unless P=NP, the dependence of the running time on 1/ε cannot be polynomial. Very recently, Chen et al. showed that the dependence on 1/ε has to be exponential if we assume the strong exponential time hypothesis (a somewhat stronger hypothesis than P ≠ NP). A long sequence of algorithms has been developed trying to obtain a low dependence on 1/ε, but they are all super-exponential in 1/ε. In this talk I will discuss an improved algorithm that almost matches the lower bound by Chen et al., essentially closing this long line of research. The new ingredient of our approach is an observation that guarantees the existence of a highly symmetric and sparse almost-optimal solution. This structure can then be exploited by integer programming techniques and enumeration. The same idea helps us obtain improved algorithms for a number of related problems.
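In symbols (a standard formulation, not specific to this work): with processing times p_1, ..., p_n and binary variables x_{ij} assigning job j to machine i, the problem reads

\[ \min T \quad \text{s.t.} \quad \sum_j p_j \, x_{ij} \le T \ \ \forall i, \qquad \sum_i x_{ij} = 1 \ \ \forall j, \qquad x_{ij} \in \{0,1\}, \]

and the algorithmic question is how the running time of a (1+ε)-approximation must depend on 1/ε.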
This is joint work with Klaus Jansen and Kim-Manuel Klein.
Sherali & Adams and Lovász & Schrijver developed systematic procedures to strengthen a relaxation, known as lift-and-project methods.
These methods have proven to be a strong tool for developing approximation algorithms, matching the best relaxations known for problems like Max-Cut and Sparsest-Cut.
In this work we provide lower bounds for these hierarchies when applied to the configuration LP for the problem of scheduling identical machines to minimize the makespan. Joint work with A. Kurpisz, M. Mastrolilli, C. Mathieu, T. Mömke and A. Wiese.
We consider games with varying stage duration as defined by Neyman
(2013) and establish some of their asymptotic properties using
the theory of nonexpansive operators in Banach spaces.
In the strip packing problem we are given a set of rectangular items
that we want to place in a strip of given width such that we
minimize the height of the obtained packing. It is a very classical
two-dimensional packing problem that has received a lot of attention
and it has applications in many settings such as stock-cutting and
scheduling. A straightforward reduction from Partition shows that the problem cannot be approximated within an absolute factor better than 3/2: a yes-instance of Partition packs into height 2, while a no-instance forces height 3. However, this reduction requires the numeric values to be exponentially large. In this work, we present a (1.4+ε)-approximation algorithm with pseudo-polynomial running time. This implies that for polynomially bounded input data the problem can be approximated with a strictly better ratio than for exponential input, which is a very rare phenomenon in combinatorial optimization. Joint work with Giorgi Nadiradze.