Game theory provides a natural framework to model situations where decisions are decentralized and the actions of each agent influence the well-being of others. While this is a prominent feature of many real-world situations, the intrinsic complexity of computing equilibria has biased the analysis towards centralized decision making, emphasizing combinatorial optimization and operations research methods. For instance, the movement of people and cars within a city, or the flow of packets in communication systems, have been addressed as optimal transportation problems, the assignment of jobs in a factory as a scheduling problem, and the relation between producers and consumers as a problem of logistics in a distribution system. In recent years, renewed interest has come from the need to deal with conflicting situations, such as the acute congestion phenomena observed in modern urban transportation and communication networks, that arise when decisions are decentralized and multiple agents compete for scarce resources. The focus thus shifts from a centralized perspective to models that naturally fit the framework of game theory: the transportation problem becomes one of network equilibrium, the scheduling problem turns into a mechanism design question in which agents submit tasks that are processed according to certain rules, and the logistics problem becomes one of competition among strategic producers and consumers.
Although game theory is a mature discipline, there are few tools for the actual computation of equilibria, its complexity analysis, and the characterization of properties such as efficiency and fairness. On the other hand, player rationality, perfect information about histories, and complete information about other players' payoffs are questionable assumptions in situations with a large number of agents or strategies. In such contexts, dynamic models for the adaptive behavior of agents can be more adequate to describe the evolution of a system towards equilibrium. These dynamics may in turn provide inspiration for new algorithms to compute equilibria. The recent books [219], [175], and [197] reflect the vitality of the area and the variety of questions that are currently investigated.
In this proposal we focus on some specific algorithmic and modeling issues that hold the potential for
collaborative research. These questions are related to the dynamics of network flows, the optimality of scheduling
coordination mechanisms, the effects of market power in producer-consumer interactions, and the assessment of
the approximate efficiency and fairness of equilibria. More precisely, the questions we plan to address
include the following.
We consider settings where the classical techniques [164] cannot be applied or do not yield analytical solutions. The most prominent cases are: multidimensional private information, dynamic settings where the designer cannot commit to future mechanisms, auctions where players face budget constraints, and cases without transferable utility (money), such as matching procedures for the joint location of couples in the medical sector. The goal is to characterize the solutions and to find efficient algorithms to compute them.
Traffic flow in congested networks is often described as a steady state equilibrium that emerges from the adaptive behavior of drivers who strive to minimize travel time while competing for limited road capacity. Wardrop [212] proposed a non-atomic equilibrium model using continuous variables to represent aggregate flows, while Rosenthal studied the atomic case in which drivers are considered as individual players. Traffic has also been described using discrete choice models for route selection, leading to the notion of stochastic user equilibrium. Since route-based models become computationally intractable for large networks, methods that avoid path enumeration were developed. These ideas evolved into the concept of Markovian equilibrium, which views route choice as a stochastic dynamic programming process: drivers proceed towards their destination by a sequential process of arc selection, using a discrete choice model at every intermediate node in their trip. The route is not fixed before the trip but is the outcome of this sequential process, so that movements are governed by a Markov chain and network flows are the corresponding invariant measures.
All these equilibrium models assume the existence of an underlying mechanism in travel behavior that converges to a steady state, though they are stated directly as equilibrium equations not tied to a specific adaptive dynamics. Empirical evidence of adaptive behavior, based on laboratory experiments and simulations, has been reported, though it was observed that the steady states differ from the standard notions of equilibria and are sensitive to the initial conditions. From a theoretical standpoint, a class of finite-lag adaptive procedures has been studied. On the other hand, several continuous dynamics describing plausible adaptive mechanisms that converge to Wardrop equilibrium have been studied, though these models describe the evolution at an aggregate population level and are not derived from individual player behavior.
Learning and adaptation are fundamental issues in the study of repeated games (see [102, 219]). The most prominent adaptive procedure is fictitious play, which assumes that at each stage players choose a best reply to the observed empirical distribution of past moves by their opponents [51, 182]. A variant called smooth fictitious play, for games with perturbed payoffs and reminiscent of Logit random choice, is discussed in [102, 124]. The assumption that players are able to record the past moves of their opponents is very stringent for games involving many players with limited observation capacity. A milder assumption is that players observe only the outcome vector, namely, the payoff obtained at every stage and the payoff that would have resulted if a different move had been played. Procedures such as no-regret [118, 120], exponential weights [97], and calibration [101] deal with such limited information contexts, assuming that players adjust their behavior based on statistics of past performance. Eventually, adaptation leads to configurations where no player regrets the choices he makes. These procedures, initially conceived for the case when the complete outcome vector is available, were adapted to the case where only the realized payoff is known [97, 102, 23]: observed payoffs are used to build an unbiased estimator of the outcome vector, to which the initial procedure is applied. Although these procedures are flexible and robust, the underlying rationality may still be considered too demanding for games with a large number of strategies and poorly informed players. This is the case for traffic, where a multitude of small players make routing decisions with little information about the strategies of other drivers or about the actual congestion in the network. A simpler adaptive rule that relies only on the sequence of realized moves and payoffs is reinforcement [19, 86]: players select moves proportionally to a propensity vector built by a cumulative procedure in which the current payoff is added to the component played, while the remaining components are kept unchanged. This mechanism is related to the replicator dynamics, and its convergence has been established for two-player zero-sum games and for some games with unique equilibria [143, 34].
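To fix ideas, the following is a minimal sketch of an exponential-weight update in the full-information case (the player observes the whole outcome vector each round). The function name, the learning rate eta, and the payoff_fn interface are our own illustrative choices, not taken from the cited works.

```python
import math
import random

def exponential_weights(payoff_fn, n_actions, n_rounds, eta=0.1):
    """Hedge-style learner: payoff_fn(action, t) returns the payoff of `action`
    at round t; the full outcome vector is assumed observable every round."""
    weights = [1.0] * n_actions
    history = []
    for t in range(n_rounds):
        total = sum(weights)
        probs = [w / total for w in weights]
        # sample an action according to the current mixed strategy
        action = random.choices(range(n_actions), weights=probs)[0]
        history.append(action)
        # multiplicative update using the observed outcome vector
        for a in range(n_actions):
            weights[a] *= math.exp(eta * payoff_fn(a, t))
        # renormalize to keep the weights numerically bounded
        m = max(weights)
        weights = [w / m for w in weights]
    return history
```

Variants that observe only the realized payoff replace the outcome vector by an unbiased estimator built from the chosen action, as described above.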
An alternative approach to the adaptive behavior of finitely many drivers in a traffic network with parallel links was recently proposed in [64]. Each player observes only the travel time of the specific route chosen on any given day, and future decisions are adjusted based on the history of past observations. Specifically, each player has a prior estimate of the average payoff of each move and selects a route based on this rough information by using a Logit random choice. The payoff of the chosen route is then observed and the perception for that particular move is updated. The procedure is repeated day after day, generating a discrete-time stochastic process in which the incremental information leads to a change in a state variable that determines player behavior. Since travel times depend on the congestion on routes imposed collectively by all players' decisions, the process progressively reveals to each player the congestion throughout the network. A basic question is whether such a simple mechanism, based on a minimal piece of information, can induce coordination. Using results on stochastic algorithms, it turns out that in the long run these dynamics stabilize on a steady state that can be characterized as a Nash equilibrium of a particular limit game. More precisely, the learning process generates asymptotic pseudo-trajectories of an associated deterministic continuous-time dynamical system called the adaptive dynamics, whose behavior depends on a “viscosity parameter” that represents the amount of noise in players' choices: if the noise is large enough, the adaptive dynamics have a unique global attractor which almost surely attracts the sequences generated by the learning process. This approach proceeds bottom-up: a discrete-time stochastic model for player behavior gives rise to an associated continuous-time deterministic dynamics that leads ultimately to an equilibrium of a particular limit game. The equilibrium is not postulated a priori but derived from basic postulates on player behavior, providing a microeconomic foundation for equilibrium.
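The sketch below conveys the flavor of this kind of discrete-time process on parallel links: perceived costs, Logit choice, and an update of the chosen component only. The constant step size, the parameter names, and the interface are our own simplifications and do not reproduce the exact scheme of [64].

```python
import math
import random

def logit_route_learning(latency_fns, n_players, n_days, beta=1.0, step=0.1):
    """Each player keeps a perceived cost per route, chooses a route by a Logit
    rule with noise parameter beta, observes the realized travel time, and
    updates only the chosen component of its perception vector."""
    n_routes = len(latency_fns)
    perception = [[1.0] * n_routes for _ in range(n_players)]
    for day in range(n_days):
        # each player picks a route via Logit choice on its perceived costs
        choices = []
        for i in range(n_players):
            weights = [math.exp(-beta * c) for c in perception[i]]
            choices.append(random.choices(range(n_routes), weights=weights)[0])
        # congestion: travel time of a route depends on how many players chose it
        loads = [choices.count(r) for r in range(n_routes)]
        for i, r in enumerate(choices):
            observed = latency_fns[r](loads[r])
            perception[i][r] += step * (observed - perception[i][r])
    return perception

# example: two parallel links with affine latencies
perceptions = logit_route_learning([lambda x: x, lambda x: 2 + 0.5 * x],
                                   n_players=20, n_days=500)
```

In the long run the empirical route choices concentrate around the steady state described above, with the parameter beta playing the role of the viscosity parameter.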
Building upon these results, we intend to address the following questions:
Recent studies [135] have shown that the standard protocols for congestion control can be reverse engineered as a decentralized algorithm that solves a network utility maximization problem (NUM). These protocols adjust the rate at which each source sends packets, using a feedback mechanism based on different metrics of network congestion. Alternatively, TCP/IP rates can also be seen as an equilibrium of a particular network game for which NUM yields a potential representation similar to Beckmann et al.'s characterization of Wardrop equilibrium. On the other hand, in current practice the routing of packets is done at an upper layer of the network architecture. We intend to use the learning process and the adaptive dynamics to design a decentralized algorithm for computing a simultaneous equilibrium for source rates and multipath routing of packets.
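For concreteness, the NUM problem underlying this reverse-engineering view can be written in its standard form (the notation below is ours) as
\[
\max_{x \ge 0} \ \sum_{s} U_s(x_s)
\qquad \text{subject to} \qquad
\sum_{s:\, \ell \in r(s)} x_s \le c_\ell \quad \text{for every link } \ell,
\]
where $x_s$ is the sending rate of source $s$, $U_s$ its (concave) utility, $r(s)$ the set of links on its route, and $c_\ell$ the capacity of link $\ell$; different congestion control variants correspond to different choices of $U_s$.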
Ford and Fulkerson [96] introduced the notion of dynamic flow in a network. In their model, we are given a directed graph with a source s and a sink t, and fixed capacities and transit times on the arcs. In this context, the capacity of an arc bounds the amount of flow that can traverse the arc in a time period. A natural question is to determine the maximum amount of flow that can be shipped through the network within a given time horizon T. Ford and Fulkerson provided a polynomial time algorithm for this problem, which has given rise to vast research in the area. For instance, Gale [104] showed that there always exists an earliest arrival flow, which simultaneously maximizes the amount of flow arriving at the sink before time θ, for every θ ≤ T. Wilkinson [214] and Minieka [159] gave pseudo-polynomial time algorithms to find an earliest arrival flow, while Hoppe and Tardos [125] present a fully polynomial time approximation scheme (FPTAS) for the problem. Many variants of this problem have been studied. Fleischer and Skutella [95] show, among other results, that for single commodity problems storage at intermediate nodes is unnecessary, and design an FPTAS that does not use any. Fleischer [94] noted that in the multi-commodity case (with many sources and sinks) an earliest arrival flow may not exist, although it does when there is a single sink. Furthermore, Baumann and Skutella [33] recently showed that such a flow can be computed efficiently.
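To make the centralized baseline concrete, here is a minimal sketch of the classical time-expanded-network reduction for maximum flow over time, assuming the standard model with per-period arc capacities and storage allowed at nodes. The instance, function name, and data layout are illustrative only.

```python
import networkx as nx

def max_flow_over_time(arcs, source, sink, horizon):
    """Maximum flow over time via a time-expanded network.

    `arcs` maps (u, v) -> (capacity_per_period, transit_time). Node (v, t)
    represents being at v at period t; holding arcs model storage at nodes.
    """
    G = nx.DiGraph()
    nodes = {u for (u, _) in arcs} | {v for (_, v) in arcs}
    for t in range(horizon + 1):
        for (u, v), (cap, tau) in arcs.items():
            if t + tau <= horizon:
                G.add_edge((u, t), (v, t + tau), capacity=cap)
        if t < horizon:  # waiting (storage) at a node
            for node in nodes:
                G.add_edge((node, t), (node, t + 1))
    # a super-source and super-sink collect all time copies
    for t in range(horizon + 1):
        G.add_edge('s*', (source, t))
        G.add_edge((sink, t), 't*')
    value, _ = nx.maximum_flow(G, 's*', 't*')
    return value

print(max_flow_over_time({('s', 'v'): (2, 1), ('v', 't'): (1, 1),
                          ('s', 't'): (1, 3)}, 's', 't', horizon=4))
```

The same time-expanded construction is what makes the equilibrium questions below computationally delicate, since its size grows with the time horizon.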
Our interest in this proposal is to study equilibrium flows in this dynamic setting, as opposed to the centralized optimization approaches described above. In this context it is simpler to pass to the limit and consider the model as a continuous time problem. Algorithmically this is not an obstacle, as shown by Fleischer and Tardos. Observe that although the transit times are fixed, there is congestion of the following form: the capacity of an arc bounds the amount of flow that can enter the arc per time unit. If more flow would like to use the arc, it queues at the entrance of the arc. Thus, if some flow particle wants to enter the arc at some point in time, it experiences the transit time of the arc plus the current waiting time in front of the arc. It follows that equilibrium is a natural concept in this problem. Furthermore, in many applications, such as evacuation problems, city traffic, and Internet flows, routing decisions are made locally by different agents rather than in a centralized fashion. Questions that we would like to address include: (i) how to compute an equilibrium, (ii) what type of adaptive learning process would lead to equilibrium flows, and (iii) how to estimate the efficiency and fairness of equilibria.
Machine scheduling problems have their origin in the optimization of production systems, and their formal mathematical treatment goes back to the 1950s. In general, they can be described as follows. Consider n tasks that have to be processed on m machines. Task i requires a certain processing time pij to be completed if processed on machine j. In addition, task i may have a weight (for instance, a level of importance) wi, a date at which it becomes available or release date ri, a deadline di, or other characteristics. Moreover, additional constraints may be present among the different tasks; these could include precedence constraints (in which certain tasks have to be processed before others), time windows, or processing delays when switching a task from one machine to another. There is also a variety of possible parallel machine environments: machines could work in parallel at different speeds, or sequentially in what is called the job-shop environment. Usually, the problem considered has been to find a sequence in which to process the tasks so as to minimize an objective such as the makespan or the sum of (weighted) completion times.
As the vast majority of these problems turn out to be NP-hard and therefore difficult to handle, a lot of work has been devoted to designing algorithms that provide reasonably close-to-optimal solutions within a limited amount of time. These algorithms are called heuristics when the quality of their performance is based on empirical observations, and approximation algorithms when one can mathematically prove that, even in the worst case, they return solutions that are close to optimal.
Recently, distributed environments have emerged as the main architecture for parallel computing. Thus, a central question is to understand machine scheduling problems where each task is managed by a selfish agent who is only interested in its own completion time. In particular, it is important to understand the existence, uniqueness, and characteristics of equilibria when, given some processing rules, agents play to optimize their own objectives (say, jobs minimize their own completion time). A lot of attention has been devoted to studying the “price of anarchy”, which compares, in the worst case, the social cost of such a game with that of an efficient allocation of resources [141]. In such distributed environments, each job j selfishly minimizes its own completion time Cj. To this end the job takes into account the scheduling policy used by the machines (i.e., the order in which a machine processes the jobs assigned to it), and the fact that the other jobs are also minimizing their own completion times. Immorlica et al. [127], Azar et al. [24], and Dürr and Thang, among others, have established that equilibria exist under several natural machine policies. However, this is not the case for all policies: Azar et al. [24] exhibit simple policies that do not possess Nash equilibria even for two machines. Furthermore, the existence question remains open for some natural policies such as “Longest First”, in which machines schedule jobs from longest to shortest processing time. On the other hand, when the social cost is the makespan of the schedule, several authors have established bounds on the price of anarchy of different policies. These bounds are constant for simple machine environments, such as identical machines; however, in more complex situations most known policies do not achieve a constant price of anarchy. Notably, Azar et al. [24] show that, for unrelated machines, no deterministic policy can achieve a constant price of anarchy for the makespan objective, while the existence of a randomized policy with this desirable property is unknown.
Naturally, algorithmic mechanism design plays a central role here in determining the best “rules of processing”, for example, those that minimize the price of anarchy for a given social cost function. What are the basic characteristics of policies that do admit Nash equilibria? These questions were first studied by Christodoulou et al. [60] and have inspired significant further work.
Although the amount of work in the area in recent years is vast, we believe that there are many important issues that have not been addressed. We are particularly interested in designing policies achieving a constant price of anarchy for social cost functions other than the makespan. In particular, Correa and Queyranne recently showed that, in the restricted parallel machine environment and for the sum of weighted completion times objective, the natural SPT rule (shortest processing time first) achieves a constant price of anarchy. We conjecture that this holds even for unrelated machines. On the other hand, it would be very interesting to understand scheduling games in a context of incomplete information, where agents have private information about the characteristics of their jobs and do not necessarily reveal their real processing times, though statistical information is public knowledge. In this context, one would for instance be interested in policies or mechanisms that induce jobs to reveal their true parameters. Several of these issues have been addressed in recent years [121]; however, we would like to study the Bayesian setting, in which statistical information is publicly known. Finally, we would like to study algorithmic questions related to learning, dynamics, and convergence to Nash equilibrium. What processes, other than best response dynamics, lead to an equilibrium? How fast is the convergence? What are natural learning processes in which jobs may acquire information about the underlying policies?
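As a toy illustration of the best response dynamics just mentioned, the sketch below runs best responses in a scheduling game on identical machines under the SPT policy. The function names, the initial assignment, and the iteration cap are our own choices; the sketch is not tied to any specific result cited above.

```python
def best_response_dynamics(proc_times, n_machines, policy=sorted, max_rounds=1000):
    """Each job selfishly picks the machine minimizing its own completion time,
    given that every machine orders its jobs by the stated policy (here SPT).
    Returns an assignment that is a pure Nash equilibrium if dynamics converge."""
    assignment = {j: j % n_machines for j in range(len(proc_times))}

    def completion_time(job, machine):
        jobs_on_m = [j for j, m in assignment.items() if m == machine and j != job] + [job]
        order = policy(jobs_on_m, key=lambda j: proc_times[j])  # SPT order
        return sum(proc_times[j] for j in order[: order.index(job) + 1])

    for _ in range(max_rounds):
        improved = False
        for job in range(len(proc_times)):
            best_m = min(range(n_machines), key=lambda m: completion_time(job, m))
            if completion_time(job, best_m) < completion_time(job, assignment[job]):
                assignment[job] = best_m
                improved = True
        if not improved:
            break
    return assignment

print(best_response_dynamics([3, 1, 4, 1, 5, 9, 2], n_machines=2))
```

Replacing the policy argument lets one experiment with rules for which convergence, or even existence of equilibria, is unclear.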
Traditional Cournot and Bertrand market-competition models consider producers of perfect substitutes of a good who decide the quantity they are going to produce or the price at which they are going to sell their production [156]. Strategic consumers eventually learn these decisions and decide whether to buy and from whom to buy. These models trade off a more realistic representation of the operations of a firm for simplicity. Two important extensions are to consider more general pricing functions that map production quantities to prices (a seminal model put forward by Klemperer and Meyer [140], called supply function equilibria) and the study of location decisions (pioneered by Hotelling).
The main decision faced by producers is to set a price function in order to maximize their revenue. Each producer submits a price function to the market, indicating the per-unit price it will charge as a function of the quantity it is required to produce. The goal of each producer is thus to choose such a function so as to maximize its revenue, anticipating that the resulting allocation is an equilibrium. This problem seems quite challenging, although some partial progress has been made for specific families of price functions. For instance, Johari and Tsitsiklis look at the situation in which each producer chooses a single parameter that determines its price function in terms of the total demand. Also, Correa and Stier-Moses have looked at the linear case, which is relevant in the context of supply function equilibria. However, very little is known for general pricing schemes. A major goal is to obtain a better understanding of good pricing policies. In particular, it would be interesting to study the efficiency of equilibria and how competition may affect it.
In [82] it is shown that deciding the existence of a (pure) Nash equilibrium of the Voronoi game on a general graph is an NP-complete problem. Later, Mavronicolas et al. [157] characterized all Nash equilibria of the Voronoi game on cycle graphs. We are interested in continuing this line of research by studying Nash equilibria and better/best response dynamics on other special graph classes. We plan to address several issues on Voronoi equilibria. Namely, given a particular graph class (e.g. trees), we intend to tackle the following questions: (i) Is the problem of deciding the existence of a Nash equilibrium polynomial or NP-complete? (ii) Is the Voronoi game a potential game (i.e., no better response sequence has a cycle)? (iii) Does a better response sequence always lead to a Nash equilibrium starting from any configuration? (iv) Estimate the price of anarchy and the price of stability of Voronoi games.
In situations where information is dispersed and/or the control over actions is decentralized, a designer faces the problem of choosing “rules of the game” that perform well across different instances of the problem. Good performance requires, in general, truthful information transmission and/or coordination of activities between the different agents. Examples abound: mechanisms for the adequate provision of public goods [169], revenue-maximizing mechanisms for the sale of an object [164, 131], and efficient trading mechanisms. More recently, in the context of complete information, researchers have studied the design of scheduling systems using the price of anarchy as the metric [60].
Interestingly, even though the revelation principle opened the door to making these problems tractable, only a small class has been solved and understood (and quite well!). Basically, as long as the problem is static, private information is single-dimensional, there is money to compensate agents, and agents' utilities are quasilinear in money, the problem lends itself to analytical solutions and well-understood interpretations. However, many interesting real-world problems which fall under the category of mechanism design do not satisfy these assumptions. We list a few of them below.
When an agent sells multiple objects, or buyers care about the identity of the winner when they do not win the auction themselves, it is natural to think of private information as multidimensional. Typically, private information is a vector with as many components as there are payoff-relevant outcomes. In this context, even though a formulation through the calculus of variations is known (see [183] for the monopolist problem and [131] for auctions with externalities), almost no headway has been made. Can the optimal mechanism be approximated using integer programming techniques? Is such a problem computationally tractable? How far from the optimal solution is a mechanism with an m-dimensional message space if private information is n-dimensional, with m < n?
If buyers with budget constraints participate in a mechanism, the quasilinearity assumption on money must be dropped. Payments that push an agent close to full spending leave the agent with little available income, introducing nonlinearities into the utility function. Although the problem has been recognized as a very important one, almost nothing is known, except for the single-agent case. How can techniques from optimal control help to solve the problem? What is the optimal way to sequentially sell objects if buyers face a total expenditure limit?
If the rules of the game are limited to allocation rules and agents cannot be compensated with money, as is the case in matching mechanisms or scheduling games, the resulting bi-level problem can be quite complicated. The designer must choose rules that perform well under any circumstance (as in the implementation literature, or the classical stable matching problem), perform well on average (Bayesian mechanism design), or perform relatively well in the worst case (minimize the price of anarchy over a class of instances).
Finally, when the situation is dynamic and the designer cannot commit to rules for future periods, the revelation principle fails, and not even a simple formulation has been agreed upon. For the simple context of an auction with a single buyer the optimal mechanism has been characterized, but little else is known. What constitutes a general mechanism in this context? How much information is it optimal to disclose between one period and the next? Is it possible to give optimality guarantees for “simple mechanisms”?
In little more than a decade we have seen a convergence of social and technological networks, with systems such as the World Wide Web characterized by the interplay between rich information content, the millions of individuals and organizations who create it, and the technology that supports it. One outcome of these changes is a major shift of research in computer science towards issues concerning, among others, efficient access, fast processing, classification, data mining and data streaming of huge volumes of information, and mechanism design. In contrast, in its early years, computer science as a discipline focused on developing the fundamentals supporting programming languages, compilers, operating systems, network protocols, algorithms, and computability.
A consequence of the heterogeneity of sources and media, and of the lack of centralized control over content on the Internet, is that the available information requires novel, more flexible techniques for querying, navigating, and manipulating it. Maintaining a human-tagged and organized collection of multimedia data (such as images, video, or music) becomes hopeless as the generation rate exceeds human capacity, and automated methods become the only choice. One such approach is content-based searching, where multimedia data can be searched directly without any need to tag or interpret it. Among the many techniques for such a search, the combinatorial approach, which reduces the problem to matching discrete multidimensional symbol streams against complex patterns, has proved to be an interesting alternative. This approach is also receiving attention in the more traditional text searching subfield: the rigid keyword-based searching that was the norm a few decades ago is being replaced by the “full-text” paradigm, where the text is seen as a sequence of symbols and any subsequence matching a certain pattern can be retrieved. Furthermore, the gigantic amounts of data to be processed, together with the relative slowness of the interconnection network, make an ideal case for studying compressed data representations that can be transmitted at reduced cost and, ideally, processed without decompressing them.
Also, novel issues arise when dealing with very large complex dynamic structures. For the sake of illustration, consider Kuratowski's theorem stating that a graph is planar if and only if it does not contain a subdivision of K5 or K3,3. This result says nothing about graphs that deviate only slightly from those satisfying the stated hypothesis. The emergence of very large dynamic networks, where knowledge of the exact link structure and edge presence is not critical, has generated interest in both theoretical foundations and mathematical techniques for understanding large networks that are robust under small changes of the network structure (or of the underlying object in general). In other words, there is a need for new approaches that can provide meaningful results concerning large objects about which complete information is lacking. In this context, techniques from areas such as random graphs, phase transitions, spectral analysis, probabilistic sampling, etc., have gained enormous relevance (for discussions of the mentioned areas, the reader is referred to [130, 199, 187, 47, 83] and references therein).
Besides the mathematical aspects encountered in the study of large networks, there is also a pervasive need for algorithmic developments associated with such structures. To illustrate, major shifts in Internet routing standards can be viewed as debates over the deficiencies of one protocol (which is nothing else than a collection of algorithms) relative to another. Information retrieval from large data sets and distributed networks such as the World Wide Web poses new algorithmic challenges where efficiency and sensitivity to underlying assumptions rise to a new level of importance. Distributed procedures performed by uncoordinated agents lacking full information about the system also gain relevance.
In the context discussed so far, there are in fact too many mathematical and algorithmic problems one could address. However, taking into consideration our background expertise as well as the relatively small number of local researchers in the area, we aim to focus on three main topics: (i) new approaches to fundamental information retrieval problems, especially those concerning large volumes of data (see Sections 2.1, 2.2, and 2.3), (ii) sampling techniques as a means of increasing the efficiency of algorithmic procedures, whether for data processing, ascertaining relevant properties, or identifying local structures in large objects such as networks (see Section 2.4), and (iii) distributed routing and information gathering in unstructured large networks (see Section 2.5).
Traditionally, the analysis of algorithms measures the complexity of a problem or algorithm in terms of the worst-case behavior over all inputs of a given size. However, on very large instances this analysis does not give enough information to choose the best algorithm in practice, as all reasonable algorithms behave similarly badly in the worst case. In this context, adaptive analysis consists in identifying whether some of those large instances are easier than others, and how algorithms can take advantage of them to escape the worst-case behavior on practical instances.
The simplest example illustrating this approach is based on a generalization of the well known Hanoi Tower problem proposed by Lucas [150]: given a pile of n disks on a slot A, what is the best way to move the disks one by one to a slot C using only an intermediate slot B, so that at no time a disk is placed above a smaller one. In the worst case, all the disks are of distinct sizes, and the optimal solution requires 2^n − 1 moves. In the best case, all the disks are of the same size, and the optimal solution consists of n moves. One possible adaptive complexity analysis of the problem considers the number s of distinct disk sizes and shows that the optimal solution requires Θ(2^s(n − s + 1)) moves. This adaptive complexity analysis refines the result of Lucas and yields a better understanding of the problem.
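A short sketch of the recursion behind these counts is given below; treating maximal groups of equal-sized disks as blocks is our own simplification, and we do not claim it reproduces the exact optimal strategy underlying the adaptive bound.

```python
def hanoi_moves(disk_sizes, src="A", aux="B", dst="C"):
    """Count the moves of the natural recursive solution.

    disk_sizes lists sizes from bottom to top; with n distinct sizes the
    recursion performs 2**n - 1 moves, while equal-sized disks can be moved
    consecutively, which is the source of the adaptive improvement."""
    if not disk_sizes:
        return 0
    bottom_size = disk_sizes[0]
    block = 0
    while block < len(disk_sizes) and disk_sizes[block] == bottom_size:
        block += 1                     # size of the bottom block of equal disks
    rest = disk_sizes[block:]
    # move the disks above to the auxiliary slot, then the block, then the rest back
    return hanoi_moves(rest, src, dst, aux) + block + hanoi_moves(rest, aux, src, dst)

assert hanoi_moves([3, 2, 1]) == 7    # three distinct sizes: 2^3 - 1
assert hanoi_moves([1, 1, 1]) == 3    # all equal: n moves
```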
Whereas the Hanoi Tower problem might seem a bit too abstract, the final objective of adaptive analysis is to bridge the gap between theory and practice. A very practical and simple problem where this applies is the search for elements in a very large sorted array of n elements. Whereas binary search always performs Θ(log n) comparisons to find the insertion rank p of a particular element x, Bentley and Yao [40] proposed a family of almost optimal (up to minor additive terms) algorithms which find this insertion rank in time adaptive to p. The simplest algorithm of this family (known variously as “doubling search”, “galloping”, or “one-sided binary search”) finds p in roughly 2 log p comparisons, and is widely used within practical algorithms, for instance to merge or intersect sorted arrays (see the discussion in the next section). The same type of adaptive result is easily obtained in more dynamic data structures, such as Brown and Tarjan's finger search trees [52] or the more advanced splay trees of Sleator and Tarjan [198], at the cost of the space needed to store additional pointers. Barbay and Chen [28] defined similar adaptive search operators to search for the tangent from a point to a planar upper hull, and for the tangent between two planar upper hulls. We aim to further extend the family of adaptive search operators to the third dimension, considering an adaptive point location problem in Delaunay triangulations, which will be an essential tool to design input-order-adaptive algorithms for the convex hull problem in three dimensions.
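A minimal sketch of the doubling (“galloping”) search just described follows; the function name and interface are ours.

```python
import bisect

def galloping_search(arr, x):
    """Find the insertion rank of x in the sorted list arr.

    Probes positions 1, 2, 4, 8, ... and then binary-searches the bracketed
    range, so the cost is O(log p) where p is the returned rank, rather than
    O(log n) as for plain binary search."""
    if not arr or x <= arr[0]:
        return 0
    bound = 1
    while bound < len(arr) and arr[bound] < x:
        bound *= 2
    lo, hi = bound // 2, min(bound, len(arr))
    return bisect.bisect_left(arr, x, lo, hi)

assert galloping_search([1, 3, 5, 7, 9, 11], 6) == 3
```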
Demaine, López-Ortiz and Munro [75] described how to compute a description of the union of k sorted arrays (or, equivalently, of k B-trees) in sub-linear time on easy instances, defining their measure of the difficulty of an instance as the cost of encoding the description of the union. They generalized their approach to the computation of the intersection (and, equivalently, of the difference) of sorted arrays, with applications to the resolution of conjunctive queries in search engines such as Google [76]. Barbay and Kenyon [29] gave two other adaptive analyses of the intersection problem, along with further applications such as supporting multi-join operators in relational databases and pattern matching queries on semi-structured data (e.g. file systems, XML). Barbay and Chen [28] further generalized this approach to the merging of planar convex hulls, using as a measure of difficulty the complexity of the certificate of the merging of the convex hulls. To this day the analyses of Demaine, López-Ortiz and Munro [75] and of Barbay and Kenyon [29] are incomparable, in the sense that no algorithm is known to be optimal for both analyses at once. We aim to define a yet finer analysis combining the main features of both, along with an intersection algorithm that is optimal for it, and derived algorithms to compute adaptively the merging of convex hulls in two and three dimensions.
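The sketch below conveys the flavor of such adaptive intersection algorithms: it cycles through the arrays with galloping searches driven by a candidate element, so that instances with short certificates require few probes. It is a simplification of our own, not the exact algorithms of [75, 29].

```python
import bisect

def gallop(a, lo, x):
    """First index >= lo whose element is >= x, found by doubling then bisecting."""
    bound = 1
    while lo + bound < len(a) and a[lo + bound] < x:
        bound *= 2
    return bisect.bisect_left(a, x, lo, min(lo + bound + 1, len(a)))

def adaptive_intersection(arrays):
    """Intersect k sorted arrays, probing each array only near the certificate."""
    if not arrays or any(not a for a in arrays):
        return []
    if len(arrays) == 1:
        return list(arrays[0])
    k = len(arrays)
    pos = [0] * k
    out = []
    candidate, agree = arrays[0][0], 1
    i = 1
    while True:
        a = arrays[i % k]
        j = gallop(a, pos[i % k], candidate)
        pos[i % k] = j
        if j == len(a):
            return out
        if a[j] == candidate:
            agree += 1
            if agree == k:                 # candidate occurs in every array
                out.append(candidate)
                pos[i % k] = j + 1
                if j + 1 == len(a):
                    return out
                candidate, agree = a[j + 1], 1
        else:
            candidate, agree = a[j], 1     # restart agreement from the new candidate
        i += 1

assert adaptive_intersection([[1, 3, 5, 7], [3, 4, 5, 8], [0, 3, 5, 9]]) == [3, 5]
```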
In applications that sort huge sequences (e.g. databases, information retrieval), the sequences are often nearly sorted, or drawn from an alphabet much smaller than the length of the sequence. In both cases, one would expect such sequences to be sorted in less time than a perfectly shuffled sequence or a sequence with n distinct values. Estivill-Castro and Wood [89] summarized the many publications showing that adaptive sorting algorithms perform better on almost sorted sequences without any external hint about the disorder of the sequence, and within a factor of the provable computational lower bound derived through combinatorial arguments on the restricted class of instances. Munro and Spira [163] showed that a median-based sorting algorithm takes optimal advantage of the entropy of the frequencies of the values appearing in the sequence. We aim to combine those results to yield algorithms that are at once adaptive to the order and the structure of the input, and to generalize them to the ordering of posets.
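As a simple concrete instance of order-adaptive sorting, the sketch below detects maximal ascending runs and merges them pairwise, so nearly sorted inputs (few runs) are handled with few comparisons. It illustrates the general idea and is not an algorithm from the cited survey.

```python
from heapq import merge

def natural_mergesort(seq):
    """Sort by splitting into maximal ascending runs and merging them pairwise;
    the cost is roughly O(n log r) where r is the number of runs."""
    if not seq:
        return []
    runs, run = [], [seq[0]]
    for x in seq[1:]:
        if x >= run[-1]:
            run.append(x)
        else:
            runs.append(run)
            run = [x]
    runs.append(run)
    while len(runs) > 1:                      # merge runs pairwise until one remains
        runs = [list(merge(runs[i], runs[i + 1])) if i + 1 < len(runs) else runs[i]
                for i in range(0, len(runs), 2)]
    return runs[0]

assert natural_mergesort([1, 2, 5, 3, 4, 9, 0]) == [0, 1, 2, 3, 4, 5, 9]
```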
Similarly, many algorithms have been known since the 1970s to compute the convex hull of n points in the plane with optimal worst-case complexity Θ(n log n), but Kirkpatrick and Seidel [136] described a better algorithm with worst-case complexity O(n log h), where h is the number of vertices on the convex hull. Since h is at most n, the latter algorithm is never worse and often better than one which only optimizes for the worst case. Several improvements of [136] were derived (see for example [57, 213, 56, 191]). Afshani, Barbay and Chan [3] further refined this analysis, up to any random permutation of a particular instance (“input order oblivious instance optimality”), and generalized the result to the third dimension. We aim to generalize those results to higher dimensions (useful for multidimensional mechanism design and the computation of correlated equilibria), and to relate problems where the structure of the instance can affect the quantity of certificates available and the ease of finding one (e.g. maximal vectors, computation of meshes, prefix-free codes, etc.).
Combinatorial pattern matching (CPM) encompasses the exact and approximate search of sequences for simple and complex patterns, including regular expressions and parameterized pattern matching [70, 168, 116]. It has a long list of applications, ranging from information retrieval to computational biology. CPM has traditionally been dealt with in sequential form, that is, without resorting to any indexing data structure built on the searched sequence. In order to handle the huge amounts of data that arise in typical applications, however, index structures must be built on the text [14, 116]. Unfortunately, these classical indexes have a high space usage, which renders them impractical precisely in the applications where they are most needed, namely those handling huge collections of sequences.
The last decade has witnessed important progress in the application and enhancement of compression mechanisms especially tailored to sequences, giving rise to so-called compressed self-indexed representations, which require space close to the entropy of the sequence, allow extracting any subsequence of it, and in addition provide indexed searching, that is, searching in time sublinear in the sequence length (for details see Navarro and Mäkinen [167]). Self-indexing can be generalized to searching non-linear structures such as trees (e.g. [162, 90]) and graphs (e.g. [61]). These have applications in many areas. For example, compact tree representations are useful for representing semistructured text databases (e.g., XML and HTML) in little space [90]. Compact graph representations allow one to solve large problems involving networks within main memory, which is relevant given that most graph algorithms are not disk-friendly. As an example, Claude and Navarro [61] showed that compressed Web graph representations offer 10-fold compression, so that key algorithms for Web information retrieval like PageRank and HITS, as well as techniques for the detection of Web spam and Web communities, can process much larger graphs and thus achieve more precise results.
Similarly, the study of combinatorial properties of images, grids, sets, permutations, mappings, partial sums, and other basic data structures allows for compressed representations of those structures that retain direct access and navigation operations on them. The applications of this area include complex searches in computational biology, multimedia databases, and structured text retrieval, as well as any area where the amount of data to handle makes compressed data representations relevant, or even mandatory. We intend to further contribute to the understanding of self-indexed representations, including how they connect compressibility and adaptivity, and to explore new application domains. Barbay and Navarro [30] have already advanced this promising line of research. Specifically, they combined adaptive sorting algorithms with data structures encoding permutations to obtain a compressed data structure for permutations and multisets which supports the application and inverse application of the permutation in reasonable time, and all the faster when the permutation is more compressible. This approach also yielded an improved sorting algorithm.
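At the heart of most such compressed representations are bitvectors supporting rank (and select) queries. The toy sketch below shows rank with sampled prefix counts; real self-indexes achieve the same with o(n) extra bits and also support select, and the class and parameter names here are our own.

```python
class RankBitvector:
    """Bitvector with rank queries answered from per-block prefix counts."""

    def __init__(self, bits, block=64):
        self.bits = bits
        self.block = block
        self.samples = [0]          # number of 1s before each block boundary
        ones = 0
        for i, b in enumerate(bits):
            ones += b
            if (i + 1) % block == 0:
                self.samples.append(ones)

    def rank1(self, i):
        """Number of 1s in bits[0:i]: one sampled count plus a short scan."""
        blk = i // self.block
        return self.samples[blk] + sum(self.bits[blk * self.block:i])

bv = RankBitvector([1, 0, 1, 1, 0, 1] * 20)
assert bv.rank1(6) == 4
```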
An extension of the formalism developed for adaptive algorithms defines the difficulty of an instance as a function of the stability of the output under small random perturbations of the input (smoothed complexity). The concept has been independently reinvented to study machine learning problems and to express the quality of the solution produced by an algorithm as a function of the difficulty of the instance (as opposed to the traditional approach that uses the quality of the optimal solution for this instance). We aim to formalize the link between these two approaches, and to explore alternative combinations of analyses.
Smoothed analysis was pioneered by Spielman and Teng [200] as a framework aimed at explaining the practical success of heuristics that do not admit a traditional worst-case analysis (a goal which is similar to that of adaptive analysis).
Smoothed analysis is a hybrid of worst-case and average-case analysis that inherits advantages of both, by measuring the expected performance of algorithms under slight random perturbations of worst-case inputs. If the smoothed complexity of an algorithm is low, then it is unlikely that the algorithm will take a long time to solve practical instances whose data are subject to slight noise and imprecision. For instance, the simplex algorithm runs in exponential time in the worst case, and yet in practice it is a very efficient algorithm. This was one of the main motivations for developing smoothed analysis and for the celebrated proof that the simplex algorithm has polynomial smoothed complexity [200].
Researchers have investigated the smoothed complexity of several problems, mostly numerical ones (e.g. [21, 53, 54, 20]) and some discrete problems (e.g. [27, 153, 201]). However, there are very few instances where smoothed analysis has been used to tackle problems concerning sequences. A notable exception is the work by Andoni and Krauthgamer [13], who initiated the study of the smoothed complexity of sequence alignment. In the pattern matching literature one often finds problems where there is a significant gap between worst-case and average-case algorithmic complexity, while practice suggests the latter is most relevant (e.g., in sequential searching, both exact and approximate; for references see the recent text on the subject by Crochemore [71]). Smoothed analysis seems especially well suited to help explain such a state of affairs. We intend to explore these issues, first for the case of exact string matching algorithms, and then in the approximate matching case. More general variants might also be considered, such as multi-pattern matching, matching with don't cares, regular expression matching, tree pattern matching, etc. Another problem is to determine the smoothed complexity of nearest neighbor search algorithms.
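An empirical counterpart of the definition is easy to set up: perturb a (worst-case) input with small Gaussian noise and measure the expected running time. The sketch below is illustrative only; the function name and the use of Python's built-in sort as the algorithm under study are our own choices.

```python
import random
import statistics
import time

def smoothed_running_time(algorithm, worst_case_input, sigma, trials=50):
    """Mean running time of `algorithm` over Gaussian perturbations of the input."""
    times = []
    for _ in range(trials):
        perturbed = [x + random.gauss(0, sigma) for x in worst_case_input]
        start = time.perf_counter()
        algorithm(perturbed)
        times.append(time.perf_counter() - start)
    return statistics.mean(times)

# example usage on an already-sorted input perturbed by noise of magnitude 0.5
print(smoothed_running_time(sorted, list(range(10000)), sigma=0.5))
```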
Testing and property testing, also referred to as the art of uninformed decisions [91], are rather new areas of the theory of computation that deal with the information that can be deduced from the input when the number of allowed queries (reads from the input) is significantly smaller than its size. There is by now a large body of literature on this subject (see e.g. the surveys of Goldreich [107, 108, 109], Fischer [91], Kiwi et al. [137], Ron [186, 187] and Rubinfeld [189], and references therein). One of the area's aims is to devise sub-linear time algorithms by avoiding having to read all of the input of a problem instance. The algorithms return a result without processing the whole input, thus lacking full information about the instance they are running on. Usually these algorithms use randomization and provide answers that are approximate, or wrong with some probability.
Kiwi, Navarro and Telha [138] considered the approximate string-matching problem (in its on-line version where the text is not pre-processed) under a relaxed scenario that is still useful for most applications. Specifically, they relax the goal of listing all substrings of the text T at (edit) distance at most k from pattern P to that of listing each such position with probability at least 1 − ϵ, where ϵ is a new input parameter. Using sampling techniques motivated by the testing paradigm they show how to beat the optimal expected time approximate string-matching algorithm of Chang and Marr [58]. We intend to further develop this approach and apply it to other pattern matching type algorithms in particular and information retrieval procedures in general. Specifically to: (1) approximate string matching for more general string distance metrics, and (2) indexed algorithms (where the text can be pre-processed) for approximate string matching. The latter question is not even well posed and seems to require conceptual developments that might prove useful elsewhere.
A celebrated recent result due to Alon et al. [7] (see also [8, 49]) completely characterizes which properties of dense graphs are testable, in terms of Szemerédi (regularity) partitions [46, IV.5]. To the best of our knowledge, no similar result is known for properties concerning ordered structures such as sequences and permutations. Given our interest in testing and pattern matching, it would be natural to address this latter gap in the development of the theory of property testers.
The interest that the testing and property testing paradigm has attracted in the computer science community is partly explained, as previously mentioned, by the changing nature of the challenges that the advent of the Internet has brought upon computer science in particular and society in general. The emergence of huge computer networks has enabled both the study and the creation of very large and complex social interactions. Given the size and dynamic nature of such networks, either knowledge of their precise structure is impossible, or its algorithmic use impractical. The testing paradigm is one of the approaches that have been put forth in order to algorithmically ascertain characteristics of the aforementioned structures. A better understanding is needed of how to handle, as well as extract information from, such objects. A somewhat (loosely) related approach consists in using randomized sampling techniques to drive a local exploration of a large structure (e.g. a network) in order to extract a smaller profile from which to identify a desired substructure (e.g. a network cluster). For example, identifying dense subgraphs is an increasingly important task in the analysis of large networks. In particular, dense subgraphs often form the cores of larger communities or network clusters. In [11, 12] a local algorithm is given for finding dense subgraphs according to a definition of density proposed by Kannan and Vinay [132]. However, the proposed algorithm requires complete knowledge of the link structure of the graph in the neighborhood where a dense subgraph is being sought. In several scenarios, access to the graph's link structure carries a cost. We would like to design a local sampling algorithm that also finds dense subgraphs but accesses only a sub-linear number of links (this could be useful when there is a cost associated with knowing the links incident to a node).
The problem of designing algorithms that work well without knowing the exact instance of the problem is one of the main issues that arise in mechanism design, where one has to choose the rules of a game without knowing the exact instance one is facing, and can also be connected with questions that arise in online optimization, where decisions are taken without full knowledge of the future. We shall elaborate further on this point later in this proposal.
Network routing is the process of selecting paths in a network along which to send network traffic. Routing is performed for many kinds of networks, including the telephone network, electronic data networks (such as the Internet), and transportation networks. In computer networks, at least in a broad sense, routing is concerned with the flow of data and with the design and analysis of the protocols that govern it. Network routing is a well developed and very broad subject. Our interest in this topic focuses on locality of information and how it affects data flows. Below we describe a sample of problems that illustrate our interest in the subject.
Routing strategies for navigating synthetic or real-world networks specify rules to move in a graph from a given vertex towards a target vertex along paths close to optimal. Even though full routing tables allow optimal routing, their use in large networks is very limited since they require storing O(n log k) bits locally (in a network of size n and maximum degree k). Practical routing strategies are forced to make assumptions on the structure of the network in order to guarantee delivery of the message, or they route along suboptimal paths. Dragan and Matamala [81] proposed a new routing strategy in which distances in a spanning tree are used as global information while routing decisions are taken locally. In this approach the local memory requirement is reduced to O(k log n) bits, plus the information needed to compute distances in the spanning tree (O(log^2 n) bits). Even though message delivery is guaranteed, routes could be longer than shortest paths. One specific aim of this proposal is to enrich our knowledge about classes of networks on which we can guarantee that this strategy is optimal. We have already established that this is indeed the case for some well-structured networks like grids, hypercubes, and dually chordal graphs.
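A minimal sketch of this kind of strategy follows: each node forwards the message to the neighbor that is closest to the target according to spanning-tree distances. This is our own simplified rendering, not the exact scheme of [81]; it assumes the tree is a spanning tree of the (connected) graph, so tree distances strictly decrease at every step.

```python
import networkx as nx

def tree_guided_route(G, tree, source, target):
    """Route greedily in G using distances measured in a spanning tree of G."""
    dist = dict(nx.shortest_path_length(tree))   # tree distances, globally known
    path, current = [source], source
    while current != target:
        # forward to the G-neighbor closest to the target in the tree metric
        current = min(G.neighbors(current), key=lambda v: dist[v][target])
        path.append(current)
    return path

G = nx.cycle_graph(6)
T = nx.minimum_spanning_tree(G)
print(tree_guided_route(G, T, 0, 3))
```

On graphs such as grids or hypercubes the routes produced by strategies of this kind can coincide with shortest paths, which is precisely the optimality question raised above.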
In rapidly changing dynamic networks, memory-efficient routing strategies such as the one discussed in the previous paragraph are impractical, mainly because of the difficulty of efficiently refreshing the information stored at the network's nodes. A prime example is unstructured peer-to-peer networks, formed dynamically by ad-hoc additions of nodes, where no algorithm is provided for organizing or optimizing network connections. When a node (referred to as a peer in this setting) wants to find a desired piece of data in the network, the query has to be flooded through the network to find peers that have the sought-after resource. We would like to formally derive some of the results announced in [115]. Specifically, we would like to explain how the optimal routing topology changes as a function of the network load, and to design protocols that would drive the network towards near-optimal interconnection patterns.
One typically visualizes communication networks as distributed systems where nodes correspond to agents or processors that can only interact locally. Several theoretical and over-simplified models were conceived for studying particular aspects of protocols such as fault-tolerance, synchronism, locality, congestion, etc. In particular, in a seminal paper Linial [147] introduced the free model (also called LOCAL, see [178]), where agents synchronously communicate through network links during discrete rounds in which every processor can send a message of unbounded size through each of its outgoing links. In contrast, in the CONGEST model (see [178]), each message sent by a processor in a network of size n has size O(log n). By slightly modifying the CONGEST model to make it more powerful, specifically by adding a universal processor connected to every other processor, we aim to determine conditions under which the universal processor can gather all the information needed to fully reconstruct the network. If this is not possible, we would like to characterize which properties of the underlying network (e.g. connectivity, acyclicity, or planarity) can be determined. We shall explore similar questions in yet another model where a universal processor is added and communication between nodes of the original network is forbidden. Such networks have diameter 2; therefore, the complexity due to a large diameter disappears, allowing us to focus exclusively on the issue of local information.
What we know today as Mathematical Programming was probably born in 1947, when G. B. Dantzig [73] invented the “simplex method” for solving general Linear Programming problems (LP), while working on U.S. Air Force planning problems.
Three key ingredients related to this event were (1) the discovery that many relevant problems related to finding the best allocation of scarce resources can be modeled as linear optimization problems [133, 134, 209, 211], (2) the theory of linear programming, which has provided deep and solid foundations for designing algorithms [103, 210], and (3) the first computational method able to solve practical-sized problems. Nowadays, the field is an important part of many industries and research disciplines, from airlines and mining to economics and biology. The INFORMS society (Institute for Operations Research and the Management Sciences) has put up a website (http://www.scienceofbetter.org) which showcases success stories of mathematical programming applications in business and management science. This success has been fed by the interplay of convex analysis, optimization, algorithms, data structures, and complexity.
The last twenty years have seen key advances in the field. On the modeling side, new approaches for modeling uncertainty have focused on the robustness of solutions to problems under uncertainty [6, 35, 36, 37, 172], while new developments in stochastic optimization theory [44, 170, 177, 192] have extended its reach to new applications and larger problems [4, 36, 42, 55, 106, 196]. On the algorithmic side, the development of interior point methods has produced new (and significantly faster) algorithms for traditional LP, and has led to the development of efficient algorithms for broader classes of convex optimization problems, including Second Order Cone Programming (SOCP) [6] and Semi-Definite Programming (SDP) [208]. Furthermore, improvements in algorithms, polyhedral theory, and data structures [45] have led to spectacular successes in some areas, and to important improvements in our ability to solve general Mixed Integer Programming problems (MIP) [15, 16, 17, 45, 165]. These new developments, together with constant hardware improvements, suggest that it is possible to achieve a qualitative leap in the type and size of the optimization problems that will be solved in the future.
However, being able to achieve this qualitative leap and address the necessary open questions requires, more than ever, a coordinated effort to exploit the interplay of the main areas of this research project; namely, Algorithms, Combinatorics, Game-Theory and Optimization. We present below a summary of our research agenda on mathematical programming. This research touches on different types of problems: nonlinear programming, stochastic programming, mixed integer programming; addresses questions about size, uncertainty, sensitivity, robustness and complexity of a problem; and investigates better solution algorithms.
Since the original emergence of the simplex method, several important developments have been made in the form of algorithms to solve general constrained problems, even with nonlinear constraints. Developments on general penalty methods combined with projection, as well as Lagrangian, bundle, and proximal methods, have been important for the real possibility of solving large-scale optimization problems; see [123, 155]. The emergence of interior point methods has been one of the most important contributions in recent decades. Although they became popular in the context of linear optimization, they are recognized today as an algorithmic tool with the potential to address problems like SDP and problems with other general conic constraints [188, 215]. They have also been extended to the general nonlinear context. Theoretical foundations for interior point methods have been solidly developed, see [171], and implementations have found their way into the most important commercial solvers. The connection of nonlinear methods, like IPM, with combinatorial optimization has also been important, as some nonlinear relaxations of combinatorial problems can be solved with IPM [105].
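Recall the basic construction behind these methods: for a problem $\min\{f(x) : g_i(x) \le 0\}$, interior point methods approximately follow the central path defined by the barrier problems
\[
\min_x \; f(x) \;-\; \mu \sum_i \log\bigl(-g_i(x)\bigr), \qquad \mu \downarrow 0,
\]
solving each barrier problem inexactly by Newton steps as the parameter $\mu$ is driven to zero.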
Starting from the seminal works [155, 160, 161], the Proximal Point Algorithm (PPA) has proved to be a useful tool for solving convex optimization problems, as well as more general monotone variational inequalities. Several variants of the PPA have been successfully used for constrained problems with a certain structure on the constraints; see, for instance, [9, 63, 128, 139]. For problems on a product space it is useful to combine the proximal point algorithm with the augmented Lagrangian approach [59, 184, 216], especially under separability assumptions, which are present in many practical problems from optimization, game theory, optimal control, and fixed-point theory, among others [22]. It is well known that decomposition methods have an impact on the performance of PPA-type algorithms [85]. We intend to investigate this type of method, combining penalization, projection-extrapolation steps, relaxation, and augmented Lagrangian techniques. We shall address theoretical questions as well as the numerical implementation of these algorithms. In particular, we shall include implicit-explicit versions and bundle methods for nonsmooth problems [174].
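For reference, the basic PPA iteration for minimizing a convex function $f$ (equivalently, for finding a zero of a maximal monotone operator) reads
\[
x^{k+1} \;=\; \arg\min_x \Bigl\{\, f(x) + \tfrac{1}{2\lambda_k}\,\|x - x^k\|^2 \,\Bigr\}, \qquad \lambda_k > 0,
\]
and the variants mentioned above differ in how this step is combined with penalization, splitting, or augmented Lagrangian terms.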
Today, truly large mixed integer linear programs can be solved with the help of decomposition based solution methods. This success, however, has not translated to large-scale nonlinear optimization problems, where the largest problems that can be solved remain a few orders of magnitude smaller than linear programs, and even that only for problems with special structure (e.g. conic). Similar decomposition methods have been developed for nonlinear programming problems that exploit structure in which the problem is separable except for a few global variables or constraints; see [206, 50, 77]. The formulations that adapt Lagrangian relaxation and branch and bound methods to nonlinear problems are straightforward; however, their success has been limited. This is in part because a decomposition method requires the repeated solution of closely related problems. Therefore, to develop efficient nonlinear decomposition methods it is necessary to: (1) create truly fast nonlinear optimization or approximation algorithms, (2) speed up the solution of subproblems by using information from prior solutions to closely related problems, and (3) exploit the problem structure, perhaps through measures of the geometry of the problem, to guide the decomposition method.
When Interior Point Methods are used within other procedures, for instance inside a branch and bound tree for a problem with integer variables, the ability to use currently available solutions as starting points becomes very important; the Simplex Method is very effective at this. By warm start we mean the ability to use the solution from the parent node to accelerate the solution of a child node in the branch and bound tree. However, when the subproblem is a QP or SOCP, the simplex method is no longer an option, and active-set methods for nonlinear programming (e.g., extensions of the simplex method to nonlinear problems) are often too inefficient for large-scale problems. We propose instead to use interior-point methods, as these methods have proven to be the preferred, and often the only effective, method for solving large-scale second order cone programs (SOCPs) or large-scale quadratic programs (QPs). (CPLEX 10.1 [126], for example, uses a barrier algorithm by default to solve QPs.) The challenge is to develop effective warm start procedures for interior-point methods in the branch and bound context, so that we can solve a sequence of closely related SOCP or QP subproblems very quickly. Despite some successes [39, 62, 113, 217, 218], it is still not known how to most effectively warm start interior-point methods. In particular, we are interested in warm start heuristics that give good practical performance in the context of branch and bound methods for nonlinear optimization models with special structure, an area where warm starting has not been thoroughly studied. We propose to develop effective techniques for warm starting interior point methods in the context of branch and bound methods, where the perturbations to the problems are often relatively simple. In particular, we will investigate how previous warm start techniques for linear programming [217, 218] work for QPs, SOCPs and more general convex nonlinear models. In addition, we will develop new and inexpensive strategies for shifting the initial point in the warm start procedure, so that it is sufficiently away from the bounds for the new perturbed problem while remaining close to the solution. We will establish theoretical conditions to guide our warm start strategies; our emphasis, however, will be on techniques that result in good practical performance.
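As a schematic example of the kind of shift we have in mind (the rule and the threshold δ > 0 are purely illustrative), given the parent node's primal-dual solution (x̂, ŷ, ŝ) one may start the child node from
\[
x^{0}_j = \max\{\hat{x}_j, \delta\}, \qquad s^{0}_j = \max\{\hat{s}_j, \delta\}, \qquad y^{0} = \hat{y},
\]
so that the initial point stays close to the parent solution while being pushed far enough from the boundary that the barrier terms remain well behaved for the perturbed child problem.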
Another important area in nonlinear optimization, which also connects with integer programming, is that of decomposition methods. Decomposition methods for linear optimization problems date back to the early days of linear programming, with the column generation approach of Dantzig-Wolfe [74] and the constraint generation algorithm of Benders decomposition [38] being two of the most widely known. In addition, a number of solution approaches for large linear, mixed integer problems use decomposition techniques such as Lagrangian relaxation and branch and bound [5, 41, 43]. There is a large literature of problem-specific algorithms for large-scale logistics problems that apply a variety of reduction, aggregation, and decomposition methods to exploit the specific problem structure. Some notable successes are branch-and-price-and-cut [32, 31], composite variable modelling [18], approximate dynamic programming [195], and other application-specific approximation methods [129]. In addition, some large-scale solution methods that exploit the special structure of stochastic programming have been explored [179].
Classifying data is a common need in machine learning. Given data points, each of which belongs to one of two classes, the goal is to determine a rule for deciding to which class a new data point belongs. In classical support vector machine (SVM) methods, data points are viewed as n-dimensional vectors and the two classes are separated using (n − 1)-dimensional hyperplanes, called linear classifiers. Since there can be many hyperplanes that classify the data, in order to reduce misclassification errors one looks for a maximal separation (margin) between the two classes, choosing the hyperplane that maximizes the distance to the nearest data point. In general, the training data may not be linearly separable, so that it cannot be separated without error; one can still find a hyperplane by relaxing the separation constraints [69].
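In its standard soft-margin form, with training pairs (x_i, y_i), y_i ∈ {−1, +1}, slack variables ξ_i and a regularization parameter C > 0, the resulting problem is the convex quadratic program
\[
\min_{w,\,b,\,\xi}\; \tfrac{1}{2}\|w\|^{2} + C\sum_{i=1}^{m}\xi_i
\quad \text{subject to} \quad y_i\,(w^{\top}x_i + b) \ge 1 - \xi_i,\;\; \xi_i \ge 0,\;\; i = 1,\dots,m,
\]
whose solution defines the separating hyperplane w^{\top}x + b = 0 [69].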
However, in many situations the training data is not known with certainty. Suppose that, instead of the exact pattern, we only have a distribution for a random variable. In this case we may replace the exact margin constraint by a probabilistic constraint requiring that the random variable lie on the correct side of the hyperplane with high probability (greater than some pre-established value), which yields a classifier with a low probability of making errors. Unless we make further assumptions or approximations on the distribution, this chance-constrained program is too difficult or even impossible to solve directly. In [196, 166] the authors use Chebyshev inequalities to derive an equivalent Second Order Cone Programming (SOCP) formulation. Assuming that the second moments of the random variables are known, but lacking additional information on the actual distribution, the SOCP formulation considers the worst distribution in the class of all distributions with the given mean and covariance. Such formulations are robust (see [35, 36]) and can be interpreted geometrically as requiring that all points in a suitable ellipsoid lie on one side of the hyperplane. Similar SOCP formulations can also be derived for designing regression functions which are robust to uncertainties in the regression setting, and can be specialized to the important case of missing values in observations, for both classification and regression problems.
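Schematically, if each pattern x_i is only known through its mean μ_i and covariance Σ_i, a multivariate Chebyshev bound turns the requirement that y_i(w^{\top}x_i + b) ≥ 1 hold with probability at least η, under the worst such distribution, into the second-order cone constraint
\[
y_i\,(w^{\top}\mu_i + b) \;\ge\; 1 + \kappa\,\sqrt{w^{\top}\Sigma_i\,w}, \qquad \kappa = \sqrt{\tfrac{\eta}{1-\eta}},
\]
which is the type of formulation obtained in [196, 166]; the ellipsoid in the geometric interpretation is centered at μ_i with shape Σ_i and radius proportional to κ.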
Under ellipsoidal uncertainty, the resulting SOCP can be solved using general interior point solvers such as SeDuMi (see [204]), but for large data sets it would be very useful to specialize these methods into more efficient algorithms that exploit the particular structure of these problems. On the other hand, there are situations where the uncertainty is described by more general, non-ellipsoidal sets, so it would be interesting to find tractable formulations and to develop efficient algorithms for such problems. Along a different direction, although it is possible to extend this approach to obtain primal kernelized robust formulations for designing nonlinear classifiers or nonlinear regression functions, it is not clear how to solve them efficiently by dual approaches as in the standard non-robust case.
Instead of a standard SVM that classifies points by assigning them to one of two disjoint half-spaces, points can be classified by assigning them to the closest of two parallel planes (in input or feature space) that are pushed apart as far as possible [152]. We plan to investigate robust formulations of such Proximal SVM methods. Finally, we plan to apply nonsmooth optimization techniques to classification problems where the natural choice of decision function is non-differentiable, as in ε-insensitive formulations of regression problems that tolerate a small error in fitting the training data set, using specialized versions of the so-called variable metric bundle methods [144].
The practical solution of bilevel programs poses extremely difficult computational challenges. Indeed, the second level optimal decisions define non-convex constraints on the problem, which, combined with the large size typical of realistic instances and a reasonable representation of uncertainty, lead to mathematical programs that challenge state-of-the-art solution methods.
Robust optimization models exhibit two attractive properties: a) they are able to deal with uncertainty without an increase in the difficulty of solving the problem, and b) they yield solutions that are insensitive to the uncertainty considered and thus are potentially efficient in practice. Recent work on robust Nash equilibria [4] and network games suggests that similar models could be developed for bilevel programs.
While it is known that robustness of a solution comes at a price, there is little understanding of the cost involved in a robust solution even for ordinary optimization models, and even less is known about this price of robustness for bilevel programs. To define the cost of a robust solution, we consider nominal data and an uncertainty set that defines the range of possible deviations from this data. The price of robustness is then the increase in cost of the robust solution relative to the optimal solution of the problem with nominal data. It has been observed in practice that this extra cost is small [36, 42, 106]. However, there is scant theoretical understanding of the price of robustness [35], and to date there is no connection between theoretical bounds and the observed price of robustness.
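One simple way to make this quantity precise (for a minimization problem; the normalization is ours) is
\[
\mathrm{PoR} \;=\; \frac{z_{\mathrm{rob}} - z_{\mathrm{nom}}}{z_{\mathrm{nom}}},
\]
where z_{\mathrm{nom}} is the optimal value with nominal data and z_{\mathrm{rob}} is the optimal value over solutions that remain feasible for every realization of the data in the uncertainty set.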
A missing ingredient is being able to determine a priori whether a given bilevel problem will be robust or sensitive to data variation. Such information is available for convex and conic programs through behavioral measures capable of bounding the size of the solution and its change under data perturbations [181, 98]. A key difficulty in extending these measures to bilevel programs comes from the non-convexity of the complementarity constraints. A few research directions that we plan to pursue are: obtaining approximate bounds for problems with approximate complementarity constraints, using homotopy methods to study sensitivity with penalized complementarity constraints, and exploiting the sensitivity analysis of convex relaxations of the bilevel program. In particular, a bilevel programming problem can be expressed as a mixed integer programming problem in which integer variables represent the complementarity constraints. We believe that sensitivity analysis of the linear or disjunctive programming relaxations of this MIP can provide meaningful information about how the integer problem changes.
Sensitivity analysis investigates the behavior of optimal solutions when the data of an optimization problem is subject to perturbations. The abstract results, although very general in their statements and assumptions, are not always easy to apply in practice as they fail to exploit the particular structure of specific classes of optimization problems. For instance, robust optimization leads to the study of problems that, although more complex than the original ones, are still highly structured. A robust version of a linear program yields a linear second-order cone program (LSOCP) that is still convex but no longer linear [35, 148]. LSOCP is a particular instance of a larger class of optimization problems called conic programming, which also arise frequently in other areas such as structural optimization, optimal control theory, and combinatorial optimization, among others. Particular subclasses of conic programs include second-order cone programming (SOCP) [6], and semi-definite programming (SDP) [208].
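The mechanism behind this is elementary: if the i-th row of the constraint matrix is only known to lie in an ellipsoid \{\bar{a}_i + P_i u : \|u\|_2 \le 1\}, then requiring a_i^{\top}x \le b_i for every such realization is equivalent to the single second-order cone constraint
\[
\bar{a}_i^{\top}x + \|P_i^{\top}x\|_2 \;\le\; b_i,
\]
so the robust counterpart of an LP with row-wise ellipsoidal uncertainty is an LSOCP [35, 148].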
We plan to investigate sensitivity analysis of nonlinear conic programs, by studying the behavior of the critical points and solutions of a conic program when the data (objective function, constraints, etc.) is slightly perturbed. Our main goal is to study some regularity properties that have proved useful in the context of classical nonlinear programming. Among these, we mainly focus on Aubin’s property and the strong regularity property at a given solution of the corresponding necessary optimality condition, namely, the Karush-Kuhn-Tucker system (KKT). Aubin’s property describes a Lipschitz behavior of the multifunction that associates the solution set of the KKT system to each canonical perturbation of the data, while strong regularity describes this same type of behavior for the solution set of a linearization of the perturbed KKT system. Strong regularity has been studied thoroughly and characterized in terms of second order optimality conditions in the cases of nonlinear programming [80], for SOCP [48] and for SDP [205], but it is still an open problem in the general setting of conic programming. On the other hand, Aubin’s property has been only characterized for standard nonlinear programs [80].
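For reference, the Aubin (Lipschitz-like) property can be stated as follows: a set-valued map S from perturbations y to solution sets S(y) has the Aubin property at ȳ for x̄ ∈ S(ȳ) if there exist a constant κ ≥ 0 and neighborhoods V of ȳ and U of x̄ such that
\[
S(y_1) \cap U \;\subseteq\; S(y_2) + \kappa\,\|y_1 - y_2\|\,\mathbb{B} \qquad \text{for all } y_1, y_2 \in V,
\]
where \mathbb{B} denotes the closed unit ball; strong regularity asks for the analogous single-valued Lipschitz behavior of the solutions of the linearized KKT system.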
These characterizations make it possible to relate the different stability properties to one another and provide practical ways to check whether a given problem is stable under small perturbations. For instance, the criterion developed in [176] permits testing whether an SOCP problem satisfies Aubin's property. The line of research proposed here will focus on finding implementable tools to test these properties. We expect that this study will also lead to a better understanding of other related behavioral measures [181, 98] and their implications for robustness and sensitivity estimates. This is an unexplored line of research that we aim to pursue in this project.
One important characteristic of many optimization problems is that the input information is typically given in the form of numerical data, part of which may be subject to uncertainty, variability or measurement errors. Hence, an immediate question is how sensitive the solution of the problem is to changes in the data, that is, how stable the problem is. A relevant question is which problem characteristics can be used to predict stability.
This research line proposes to consider geometric characteristics of the problem instance, which can be measured by means of inscribed and circumscribed ellipsoids with respect to the feasible set of the problem. These constructions are common in convex optimization; see Grötschel et al. [114] and Dvoretzky [84]. We are particularly interested in the optimization problem max{cx : x ∈ S}, where S is a convex set described in "linear" terms, like a polyhedron. This setup is general enough to cover formulations in which the set S is given by linear matrix inequalities or conic inclusions, which includes many other problems in convex optimization, such as Semidefinite Programming; see Ben-Tal and Nemirovski [37].
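A simplified example of the kind of measure we have in mind (illustrative, not the precise definition we will adopt) is the aspect ratio
\[
\tau(S) \;=\; \frac{R}{r},
\]
where r is the radius of a largest ball (or ellipsoid, after a suitable affine change of coordinates) contained in S and R is the radius of a smallest ball containing S; instances with large τ(S) are the ones with "poor" geometry in the sense used below.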
We propose to investigate the effect and explanatory power of the above mentioned geometric measures on the stability of the optimization problem. These measures have already been shown to affect the complexity of some algorithms for convex optimization; see Freund and Vera [99, 100]. A problem which is relatively stable represents a "robust" model, and being able to assess the robustness of a problem is very important from a practical point of view. On the other hand, the recent methodology of Robust Optimization provides the elements to compute robust solutions, that is, solutions which are immunized against changes in the data within a certain range. For an optimization problem which is very sensitive or unstable, robust solutions may come at the price of a severe deterioration of the objective function. Hence, the analysis of the robustness of the problem and its sensitivity is relevant in the context of Robust Optimization and other methodologies connected to data variability or uncertainty. The connection of these geometric measures with other stability measures for a problem is also a relevant topic, as is the actual computation of estimates of problem robustness, possibly for special problem structures.
We also want to explore the relation of these geometric measures to the complexity of interior point algorithms and related procedures when they are used to address questions about the feasibility of the set S. We are particularly interested in obtaining lower bounds on the complexity, since upper bounds have already been established. Existing results for the Ellipsoid Algorithm show that the geometric measures defined above can be related to the number of iterations required by the algorithm: for a class of instances with "poor" geometry, there must be at least some instance on which the algorithm requires a long time; see [100]. We expect a similar result to hold for a class of interior point algorithms as well as for other related algorithms based on barrier functions, although the approach will have to differ from the one used for the Ellipsoid Algorithm. Of particular relevance for this line of research are the works of Todd [207] and Nesterov and Todd [172].
Stochastic Programming (SP) is the study of decision making under uncertainty, combining mathematical programming and probability theory. Unlike other approaches to uncertainty, such as robust optimization, the random elements of SP models have a known distribution. SP theory offers a variety of models to address the presence of random data in optimization problems, such as chance-constrained models, two- and multi-stage models, and models involving risk measures (conditional value-at-risk, mean absolute deviation). Excellent surveys of the subject can be found in [192] and [44].
The focus of the investigation will be Chance-Constrained Programming (CCP). In these models, we say a point is feasible if it satisfies a constraint with sufficiently high probability. Chance-constrained problems are usually intractable to solve directly and sampling methods have been successfully employed to obtain good candidate solutions as well as bounds for the original problem (see [170, 151, 177, 55]).
To date, the existing algorithms for chance-constrained programming have limitations. Some only accept random parameters that follow a multivariate normal distribution, while others do not allow randomness multiplying decision variables, which, for financial applications for example, is a severe limitation. We plan to investigate theoretical properties of CCP in order to develop sampling algorithms that obtain good candidate solutions for such problems. More specifically, the goal is to determine the number of samples and the number of problems one needs to solve in order to obtain good solutions and bounds with high probability.
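Schematically, for a chance constraint P(G(x, ξ) ≤ 0) ≥ 1 − ε, the sampling approach replaces the true distribution by N independent samples ξ^1, …, ξ^N and solves the sample approximation
\[
\frac{1}{N}\sum_{i=1}^{N} \mathbf{1}\{G(x,\xi^{i}) \le 0\} \;\ge\; 1-\gamma,
\]
for some tolerance γ, possibly different from ε; this approximation can be written as a mixed integer program with one indicator variable per sample. The questions above then amount to choosing N and γ so that, with high probability, solutions of the sampled problem are feasible and near-optimal for the true problem, in the spirit of [170, 151, 55].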
We also want to explore the use of CCP in practical problems. As mentioned in 3.2, the classification of data can be done probabilistically, with each point assigned to a class subject to a user-specified probabilistic threshold. In telecommunications, it is common to have a network of links and nodes arranged so that information can be transmitted from a source to one or several destinations. The links have maximum capacities, and if a capacity is exceeded the connection is blocked. In the context of CCP, we are interested in modeling and solving efficiently the problem of allocating capacity in a network at minimal cost so that the probability that a link is blocked is at most a pre-defined reliability level.
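A stylized version of this model (the notation is ours) is
\[
\min \; \sum_{e} c_e\,x_e \quad \text{subject to} \quad \mathbb{P}\big(\xi_e \le x_e\big) \;\ge\; 1-\varepsilon_e \;\;\text{for every link } e,
\]
where x_e is the capacity allocated to link e, ξ_e the random traffic offered to it, c_e the unit cost and ε_e the admissible blocking probability; joint versions of the constraint, coupling the links used by a connection, are also of interest.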
Many real-world problems are modeled and solved as mixed integer linear problems. The efficacy of this approach is largely due to the ability to solve the resulting linear problems efficiently, together with the ability to exploit common substructures to strengthen the linear formulations by generating cutting planes. Finding good polyhedral descriptions and separation algorithms is crucial for problems where an optimal solution is required. Possibly the best known example is the Traveling Salesman Problem (TSP), for which current technology makes it possible to compute optimal solutions for large scale instances, using the many facet-defining inequalities [165] and separation heuristics that have been proposed and implemented, together with techniques such as aggregation, rounding heuristics, and special purpose branching mechanisms [16]. This combination of techniques has led to spectacular success for the TSP, including solving to optimality instances with up to 85,900 cities and obtaining solutions within 0.1% of optimality for instances with up to 1,000,000 cities.
Unfortunately, the introduction of uncertainty in combinatorial problems through extended scenario formulations leads to very large models for which most of the known classes of valid inequalities have proven to be weak. However, these extended models are not completely general; they usually consist of several problems tied together by common capacity (or, in this case, probability) constraints, which in many cases include integer or semi-continuous variables. Moreover, in most cases the original deterministic problem naturally has a large network-like structure that interacts with the global side constraints. This kind of structure is very common in practice; examples include open-pit mine production planning, central power generation planning, and time-expanded production scheduling problems, among many others.
We propose to study the polyhedral structure of this kind of problem, both from a theoretical side, finding facet-defining inequalities and separation algorithms, and from a practical side, implementing and testing these ideas on particular problems. Other related questions deal with approximating the true optimal value of these problems, be it by simple aggregation techniques or by sampling techniques; we hope to obtain both good theoretical bounds and a practical evaluation of newly proposed and existing mechanisms. In particular, we expect to investigate both the theoretical and the practical rates of convergence.
The most successful approach to solving general MIPs today is branch and cut, where general cutting planes are a crucial factor in overall performance. After the great success in the 90's of general purpose cutting planes such as GMI cuts [110, 45], a great deal of research was devoted to extending those ideas to find other families of general cuts that could consistently outperform GMI cuts. However, results have been mixed: although there are several extensions that in theory are at least as good as GMI cuts, in practice they do not seem to offer much advantage. Most of these extensions have focused on deriving inequalities from the master cyclic group problem introduced by Gomory and Johnson [112], which looks at problems with a single linear constraint.
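For reference, the GMI cut is derived from a single tableau row: if the row reads x_B + \sum_{j \in I} \bar{a}_j x_j + \sum_{j \in C} \bar{a}_j x_j = \bar{b}, where x_B and the nonbasic variables in I are integer, those in C are continuous, all nonbasic variables are nonnegative, and f_j = \bar{a}_j - \lfloor \bar{a}_j \rfloor, f_0 = \bar{b} - \lfloor \bar{b} \rfloor > 0, then
\[
\sum_{j \in I:\, f_j \le f_0} \frac{f_j}{f_0}\,x_j
\;+\; \sum_{j \in I:\, f_j > f_0} \frac{1-f_j}{1-f_0}\,x_j
\;+\; \sum_{j \in C:\, \bar{a}_j > 0} \frac{\bar{a}_j}{f_0}\,x_j
\;+\; \sum_{j \in C:\, \bar{a}_j < 0} \frac{-\bar{a}_j}{1-f_0}\,x_j \;\ge\; 1
\]
is valid for the feasible mixed integer points [110].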
The theoretical importance of looking at multi-row relaxations has been established in a number of works. Cook et al. [66] give an example with infinite Chvátal-Gomory rank (i.e., the convex hull of the integer points cannot be obtained by adding inequalities derived from one-row relaxations). However, Andersen et al. [10] prove that, by looking at inequalities generated from two-row relaxations, the convex hull of the Cook-Kannan-Schrijver example can be obtained by adding a single cut. Li and Richard [145] extend this result to higher dimensions.
An interesting recent development has been the work of Cornuéjols and Borozan [67] and Gomory [111], who proposed to look at the so-called infinite relaxation problem, also introduced by Gomory and Johnson [112]. A first property of this relaxation is that it considers several constraints at the same time, thus including cuts such as those in [10]. Secondly, it focuses on the relation between a few integer variables and many continuous variables simultaneously; this may be relevant, since most problems have both integer and continuous variables. Cornuéjols and Borozan [67] show that any minimal valid inequality for the relaxation can be related to maximal, convex, lattice-free polyhedra, thus identifying relevant inequalities with simple geometric objects. Moreover, Cornuéjols and Margot [68] provide a description of all facets of the infinite (and finite) relaxation with two constraints, while Dey and Wolsey [78] studied the case where some of the remaining variables are integer.
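Roughly, this model studies, for a fixed fractional point f ∈ \mathbb{Q}^{q} \setminus \mathbb{Z}^{q}, the points satisfying
\[
x \;=\; f + \sum_{r \in \mathbb{Q}^{q}} r\,y_r, \qquad x \in \mathbb{Z}^{q}, \quad y_r \ge 0 \text{ with finite support},
\]
together with valid inequalities of the form \sum_{r} \psi(r)\,y_r \ge 1; the result of [67] relates the minimal such inequalities to gauge functions of maximal lattice-free convex sets containing f in their interior.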
Espinoza [87] has shown that, in practice, these cuts can have a positive impact on total running time and final gap for general MIP problems; however, these experiments suggest that looking at relaxations with more than two rows is key. Moreover, unlike Gomory cuts, the separation problem does not have an analytical solution, forcing the use of separation techniques such as those in [15].
We plan to further investigate approximations to the separation problem for these inequalities, aiming to establish theoretical approximation factors and to evaluate them in practice, focusing on the multi-row case and on practical implementations for general MIP problems.
An alternative to using new classes of cuts is to use Gomory cuts more aggressively. However, these cuts are numerically unstable and prone to rounding errors leading to invalid cuts (see [154]). To work around this problem, Dash, Cook, Fukasawa and Goycoolea [65] have shown that, by controlling rounding modes, it is possible to add numerically safe Gomory cuts much more aggressively, obtaining significantly stronger bounds. Yet another alternative is to apply the "MIR procedure" to non-optimal tableau rows. In an ongoing study, Dash and Goycoolea have shown that with this type of procedure it is possible to significantly improve bounds using only rank-1 cuts. They have also found that this procedure can generate globally valid Gomory cuts when applied to relaxations of nodes in a branch-and-bound tree.
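The elementary step underlying the standard MIR procedure is the following basic inequality: for the simple mixed set \{(x, y) \in \mathbb{Z} \times \mathbb{R}_{+} : x + y \ge b\} with f = b - \lfloor b \rfloor > 0,
\[
x + \frac{y}{f} \;\ge\; \lceil b \rceil
\]
is valid; procedures of this type typically apply this step to suitable aggregations and scalings of (not necessarily optimal) tableau rows.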
We are interested in further studying the Gomory mixed integer rounding cut and in enhancing its computational performance. We expect these studies to lead to new and effective strengthening techniques for large MILP problems.
One of the drawbacks of branch and bound algorithms is that either they terminate quite early, or the improvement over time (in the proven quality of the solutions) goes to zero as the branch and bound tree grows. This situation is common to many very hard MIP problems, such as the Traveling Salesman Problem [16] and the football pool problem [117, 146], among many others.
One possible way to improve this situation is to summarize the information of a partial branch and bound tree in a more succinct way. This has been attempted by many authors using different techniques, including fathoming propagation, conflict analysis, and SAT-related techniques (see [2, 1, 79]).
However, at least in theory [114], it is possible to transform a partial branch and bound tree into a set of cuts for the original problem. Although the efficacy of such an approach remains unknown, related techniques have been used with great success, at least for very hard instances of the TSP [16], leading to the solution of the complete TSPLIB [180, 17].
This success gives some hope that such an approach might work for general MIP problems. Our focus will be on practical implementations of these techniques for general MIP problems, including extensive evaluation on standard test sets.
A key element to solving large-scale MILP problems is to generate feasible solutions quickly. In order to find such solutions, many different heuristics have been proposed for general MILP models. See for example Balas et al. [25], Balas and Martin [26], Danna et al. [72], Fischetti et al. [92], Fischetti and Lodi [93], Hansen et al. [119], Hillier [122], Lokketangen and Glover [149] or Saltzman and Hillier [190]. We focus on two heuristics for MILP problems: aggregation and local-search.
Aggregation is well studied for linear programming (LP) problems, where variables and constraints of large LP problems are aggregated in order to obtain a smaller and simpler LP. Most work on aggregation focuses on the optimal way to aggregate variables (Storoy [202, 203]) or on improving bounds on the gap between the aggregated and the original model (Zipkin [220], Mendelssohn [158]). A survey appears in Rogers et al. [185]. With LP aggregation it is possible to obtain bounds for the original problem.
In integer programming, aggregation techniques have not been studied as extensively as in the linear case. However, aggregation has been applied successfully to specific problems where structure can be exploited, for example aggregating period constraints in a multi-period model of distribution of goods (Kulkarni and Mohanty [142]) or in mine planning (Newman and Kuchta [173]), among others. In network optimization, a classical case is the large-scale transportation problem. In Zipkin [221] and Zipkin [222], nodes of a transportation network are aggregated and new costs are calculated as a convex combination of costs in the original network, yielding bounds for the original problem. Moreover, in Shapiro [194] this aggregation is combined with location decisions, using Benders decomposition (see Shapiro [193]) to solve the capacitated plant location problem. In Espinoza et al. [88], an aggregation technique is presented to solve a multi-commodity network flow model with side constraints, by removing nodes and replacing them with new arcs. This aggregation may increase the number of edges in the graph, but the LP relaxation of the problem on the aggregated graph is stronger than the original one.
Local-search heuristics are another promising technique for very large IP problems. Given a feasible solution (or incumbent), this technique concentrates on finding other feasible solutions which are similar (within some appropriately defined neighborhood) and which have an improved objective function value. For example, in the Local Branching method of Fischetti and Lodi [93], branching decisions are made in a heuristic manner so that, after each branching decision, one of the resulting nodes only contains solutions similar (in Hamming distance) to the incumbent. In the RINS heuristic of Danna et al. [72], at every node of the branch and bound tree, variables whose values coincide with the incumbent are fixed, and the sub-MIP resulting from optimizing over the remaining variables is solved.
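For a MILP with binary variables indexed by B and incumbent x̄, the local branching constraint of [93] restricts the search to the Hamming ball
\[
\Delta(x,\bar{x}) \;=\; \sum_{j \in B:\, \bar{x}_j = 1} (1 - x_j) \;+\; \sum_{j \in B:\, \bar{x}_j = 0} x_j \;\le\; k,
\]
for a small neighborhood radius k, while the complementary branch imposes \Delta(x,\bar{x}) \ge k+1.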
To our knowledge, there are no local-search based heuristics for general MILP problems that do not require solving the original linear programming relaxation and branching, which is prohibitive for sufficiently large problems. We expect to understand which integer programming formulation structures are suitable for a local-search heuristic. This class of heuristics is important from a computational perspective because they are ideal for a parallel computing environment, where different processors can explore diverse neighborhoods of an incumbent solution.
We propose to study these two families of heuristics for MILP problems, in order to understand which characteristics a MILP problem should have for them to yield positive results.
[1] T. Achterberg. Conflict analysis in mixed integer programming. Discrete Optimization, 4:4–20, 2007.
[2] T. Achterberg, T. Koch, and A. Martin. Branching rules revisited. Operations Research Letters, 33:42–54, 2005.
[3] P. Afshani, J. Barbay, and T. Chan. Instance-optimal geometric algorithms. Submitted, 2009.
[4] M. Aghassi and D. Bertsimas. Robust game theory. Mathematical Programming B, 107:231–273, 2006.
[5] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin. Network Flows: Theory, Algorithms, and Applications. Prentice Hall, New Jersey, 1993.
[6] F. Alizadeh and D. Goldfarb. Second order cone programming. Mathematical Programming B, 95:3–51, 2003.
[7] N. Alon, E. Fischer, I. Newman, and A. Shapira. A combinatorial characterization of the testable graph properties: It’s all about regularity. SIAM J. on Comput., 39(1):143–167, 2009.
[8] N. Alon and A. Shapira. Every monotone graph property is testable. In 37th Annual ACM Symposium on Theory of Computing STOC, pages 128–137, 2005.
[9] F. Alvarez, J. López, and H. Ramírez. Interior proximal algorithm with variable metric for second-order cone programming: Applications to structural optimization and classification by hyperplanes. Optimization Methods and Software, to appear, 2009.
[10] K. Andersen, Q. Louveaux, R. Weismantel, and L. A. Wolsey. Inequalities from two rows of a simplex tableau. In M. Fischetti and D. P. Williamson, editors, IPCO, volume 4513 of Lecture Notes in Computer Science, pages 1–15. Springer-Verlag, 2007.
[11] R. Andersen. A local algorithm for finding dense subgraphs. In 19th ACM-SIAM Symposium on Discrete Algorithms SODA, pages 1003–1009, 2008.
[12] R. Andersen and K. Chellapilla. Finding dense subgraphs with size bounds. In 6th International Workshop on Algorithms and Models for the Web-Graph, pages 25–37. Springer-Verlag, 2009.
[13] A. Andoni and R. Krauthgamer. The smoothed complexity of edit distance. In 35th International Colloquium on Automata, Languages and Programming ICALP, volume 5125 of Lecture Notes in Computer Science, pages 357–369. Springer-Verlag, 2008.
[14] A. Apostolico. The myriad virtues of subword trees. In Combinatorial Algorithms on Words, volume 12 of NATO Advanced Science Institutes, Series F, pages 85–96. Springer-Verlag, 1985.
[15] D. Applegate, R. E. Bixby, V. Chvátal, and W. Cook. TSP cuts which do not conform to the template paradigm. In Computational Combinatorial Optimization, Optimal or Provably Near-Optimal Solutions [based on a Spring School], pages 261–304, London, UK, 2001. Springer-Verlag GmbH.
[16] D. Applegate, R. E. Bixby, V. Chvátal, and W. Cook. Implementing the Dantzig-Fulkerson-Johnson algorithm for large traveling salesman problems. Mathematical Programming, 97:91–153, 2003.
[17] D. L. Applegate, R. E. Bixby, V. Chvátal, W. Cook, D. G. Espinoza, M. Goycoolea, and K. Helsgaun. Certification of an optimal TSP tour through 85,900 cities. Operations Research Letters, 37(1):11–15, 2009.
[18] A. Armacost, C. Barnhart, and K. Ware. Composite variable formulations for express shipment service network design. Transportation Science, 36:1–20, 2002.
[19] W. B. Arthur. On designing economic agents that behave like human agents. Evolutionary Economy, 3:1–22, 1993.
[20] S. Arvind. Smoothed Analysis of Gaussian Elimination. PhD thesis, MIT, 2004.
[21] S. Arvind, D. Spielman, and S.-H. Teng. Smoothed analysis of the condition numbers and growth factors of matrices. CoRR, 2003.
[22] H. Attouch and M. Soueycatt. Augmented lagrangian and proximal alternating direction methods of multipliers in hilbert spaces. applications to games, pde’s and control. Pac. J. Optim., 5:17–37, 2009.
[23] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The non-stochastic multiarmed bandit problem. SIAM J. on Comput., 32:48–77, 2002.
[24] Y. Azar, K. Jain, and V. Mirrokni. (almost) optimal coordination mechanisms for unrelated machine scheduling. In Proceeding of SODA 2008, pages 323–332, 2008.
[25] E. Balas, S. Ceria, M. Dawande, F. Margot, and G. Pataki. OCTANE: A new heuristic for pure 0-1 programs. Operations Research, 49:207–225, 2001.
[26] E. Balas and C. H. Martin. Pivot and complement - a heuristic for 0-1 programming. Management Science, 26:86–96, 1980.
[27] C. Banderier, R. Beier, and K. Mehlhorn. Smoothed analysis of three combinatorial problems. In 28th International Symposium in Mathematical Foundations of Computer Science MFCS, volume 2747 of Lecture Notes in Computer Science, pages 198–207. Springer-Verlag, 2003.
[28] J. Barbay and E. Chen. Adaptive planar convex hull algorithm for a set of convex hulls. In 20th Annual Canadian Conference on Computational Geometry CCCG, 2008.
[29] J. Barbay and C. Kenyon. Alternation and redundancy analysis of the intersection problem. ACM Trans, Algorithms, 4(1):1–18, 2008.
[30] J. Barbay and G. Navarro. Compressed representations of permutations, and applications. In 26th International Symposium on Theoretical Aspects of Computer Science STACS, volume 09001 of Dagstuhl Seminar Proceedings, pages 111–122. Internationales Begegnungs- und Forschungszentrum fuer Informatik (IBFI), 2009.
[31] C. Barnhart, C. Hane, and P. Vance. Using branch-and-price-and-cut to solve origin-destination integer mulitcommodity flow problems. Operations Research, 48:318–326, 2000.
[32] C. Barnhart, E. Johnson, G. Nemhauser, M. Savelsbergh, and P. Vance. Branch-and-price: column generation for solving huge integer programs. Operations Research, 46:316–329, 1998.
[33] N. Baumann and M. Skutella. Solving evacuation problems efficiently–earliest arrival flows with multiple sources. In 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 399–410, 2006.
[34] A. Beggs. On the convergence of reinforcement learning. Journal of Economic Theory, 122:1–36, 2005.
[35] A. Ben-Tal and A. Nemirovski. Robust convex optimization. Mathematics of Operations Research, 23:769–805, 1998.
[36] A. Ben-Tal and A. Nemirovski. Robust solutions of linear programming problems contaminated with uncertain data. Mathematical Programming, 88:411–424, 2000.
[37] A. Ben-Tal and A. Nemirovski. Lectures on Modern Convex Optimization. MPS-SIAM Series on Optimization, Philadelphia, PA, 2001.
[38] J. F. Benders. Partitioning procedures for solving mixed variables programming problems. Numerische Mathematik, 4:238–252, 1962.
[39] H. Y. Benson and D. F. Shanno. An exact primal-dual penalty method approach to warmstarting interior-point methods for linear programming. Technical report, Drexel University, LeBow College of Business, 2006.
[40] J. L. Bentley and A. C. C. Yao. An almost optimal algorithm for unbounded searching. Information Processing Letters, 5(3):82–87, 1976.
[41] D. P. Bertsekas. Constrained Optimization and Lagrange Multiplier Methods. Athena Scientific, Belmont, 1993.
[42] D. Bertsimas and M. Sim. The price of robustness. Operations Research, 52:35–53, 2004.
[43] D. Bertsimas and J. Tsitsiklis. Introduction to Linear Optimization. Athena Scientific, Belmont, 1997.
[44] J. R. Birge and F. Louveaux. Introduction to Stochastic Programming. Springer-Verlag, 1997.
[45] R. E. Bixby, M. Fenelon, Z. Gu, E. Rothberg, and R. Wunderling. MIP: Theory and practice - closing the gap. In Proceedings of the 19th IFIP TC7 Conference on System Modelling and Optimization, pages 19–50, Deventer, The Netherlands, 2000. Kluwer, B.V.
[46] B. Bollobás. Modern Graph Theory. Graduate Text in Mathematics Series. Springer-Verlag, 1998.
[47] B. Bollobás, R. Kozma, and D. Miklós, editors. Handbook of Large-Scale Random Networks, volume 18 of Bolyai Society Mathematical Studies. Springer-Verlag, 2009.
[48] J. F. Bonnans and H. Ramirez. Perturbation analysis of second-order cone programming problems. Mathematical Programming B, 104:205–227, 2005.
[49] C. Borgs, J. Chayes, L. Lovász, V. Sos, B. Szegedy, and K. Vesztergombi. Graph limits and parameter testing. In 38th Annual ACM Symposium on Theory of Computing STOC, pages 261–270, 2006.
[50] R. D. Braun. Collaborative optimization: An architecture for large-scale distributed design. PhD thesis, Stanford University, 1996.
[51] G. Brown. Iterative solution of games by fictitious play. In T. C. Koopmans, editor, Activity Analysis of Production and Allocation, pages 374–376. Wiley, New York and Chapman & Hall, London, 1951.
[52] M. Brown and R. Tarjan. Design and analysis of a data structure for representing sorted lists. SIAM J. of Comput., 9(3):594–614, 1980.
[53] P. Bürgisser, F. Cucker, and M. Lotz. General formulas for the smoothed analysis of condition numbers. C. R. Acad. Sc. Paris, 343(2):145–150, 2006.
[54] P. Bürgisser, F. Cucker, and M. Lotz. Smoothed analysis of complex conic condition numbers. J. Math. Pures Appl., 86(4):293–309, 2006.
[55] M. Campi and S. Garatti. The exact feasibility of randomized solutions of robust convex programs. SIAM Journal on Optimization, 3(19):1211–1230, 2008.
[56] T. Chan. Optimal output-sensitive convex hull algorithms in two and three dimensions. Discrete & Computational Geometry, 16:361–368, 1996.
[57] T. Chan, J. Snoeyink, and C.-K. Yap. Primal dividing and dual pruning: Output-sensitive construction of four-dimensional polytopes and three-dimensional voronoi diagrams. Discrete & Computational Geometry, 18(4):433–454, 1997.
[58] W. Chang and T. Marr. Approximate string matching and local similarity. In 5th Annual Symposium in Combinatorial Pattern Matching CPM, volume 807 of Lecture Notes in Computer Science, pages 259–273. Springer-Verlag, 1994.
[59] G. Chen and M. Teboulle. A proximal-based decomposition method for convex minimization problems. Math. Programming A, 64:81–101, 1994.
[60] G. Christodoulou, E. Koutsoupias, and A. Nanavati. Coordination mechanisms. In Proceedings of ICALP 2004, pages 345–357, 2004.
[61] F. Claude and G. Navarro. A fast and compact web graph representation. In 14th International Symposium on String Processing and Information Retrieval SPIRE, volume 4726 of Lecture Notes in Computer Science, pages 118–129. Springer-Verlag, 2007.
[62] M. Colombo, J. Gondzio, and A. Grothey. A warm-start approach for large-scale stochastic linear programs. Technical Report MS-06-004, The University of Edinburgh, School of Mathematics, 2006.
[63] R. Cominetti and M. Courdurier. Coupling general penalty schemes for convex programming with the steepest descent method and the proximal point algorithm. SIAM J. Optim., pages 745–765, 2002.
[64] R. Cominetti, E. Melo, and S. Sorin. A payoff-based learning procedure and its application to traffic games. Games and Economic Behavior, to appear, 2008.
[65] W. Cook, S. Dash, R. Fukasawa, and M. Goycoolea. Numerically accurate Gomory mixed-integer cuts. INFORMS Journal on Computing, to appear, 2009.
[66] W. Cook, R. Kannan, and A. Schrijver. Chvátal closures for mixed integer programming problems. Mathematical Programming, 47:155–174, 1990.
[67] G. Cornuéjols and V. Borozan. Minimal valid inequalities for integer constraints. George Nemhauser Symposium, Atlanta, July 2007.
[68] G. Cornuéjols and F. Margot. On the facets of mixed integer programs with two integer variables and two constraints. Mathematical Programming, 120(2):429–456, 2009.
[69] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20:273–297, 1995.
[70] C. Crochemore and W. Rytter. Text algorithms. Oxford University Press, 1994.
[71] M. Crochemore, C. Hancart, and T. Lecroq. Algorithms on Strings. Cambridge University Press, 2007.
[72] E. Danna, E. Rothberg, and C. Le Pape. Exploring relaxation induced neighborhoods to improve MIP solutions. Mathematical Programming, 102:71–90, 2005.
[73] G. B. Dantzig. Programming in a linear structure. Comptroller, USAF Washington D.C., 1948.
[74] G. B. Dantzig and P. Wolfe. Decomposition principle for linear programs. Operations Research, 8:101–111, 1960.
[75] E. Demaine, A. López-Ortiz, and I. Munro. Adaptive set intersections, unions, and differences. In 11th ACM-SIAM Symposium on Discrete Algorithms SODA, pages 743–752, 2000.
[76] E. Demaine, A. López-Ortiz, and I. Munro. Experiments on adaptive set intersections for text retrieval systems. In 3rd Workshop on Algorithm Engineering and Experiments, pages 5–6, 2001.
[77] A. V. DeMiguel. Two decomposition algorithms for nonconvex optimization problems with global variables. PhD thesis, Stanford University, 2001.
[78] S. S. Dey and L. A. Wolsey. Lifting integer variables in minimal inequalities corresponding to lattice-free triangles. In A. Lodi, A. Panconesi, and G. Rinaldi, editors, IPCO 2008, volume 5035 of Lecture Notes in Computer Science, pages 463–475, 2008.
[79] H. E. Dixon and M. L. Ginsberg. Combining satisfiability techniques from ai and or. Knowl. Eng. Rev., 15(1):31–45, 2000.
[80] A. Dontchev and R. Rockafellar. Characterizations of lipschitzian stability in nonlinear programming. In Mathematical programming with data perturbations, pages 65–82. Lecture Notes in Pure and Appl. Math., 195, Dekker, New York, 1998.
[81] F. F. Dragan and M. Matamala. Navigating in a graph by aid of its spanning tree. In ISAAC, pages 788–799, 2008.
[82] C. Dürr and N. Thang. Nash equilibria in voronoi games on graphs. In 15th Annual European Symposium ESA, volume 4698 of Lecture Notes in Computer Science, pages 17–28. Springer-Verlag, 2007.
[83] R. Durrett. Random Graph Dynamics. Number 20 in Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2006.
[84] A. Dvoretzky. A theorem on convex bodies and applications to Banach spaces. Proceedings of the National Academy of Sciences, 45:223–226, 1959.
[85] J. Eckstein and D. P. Bertsekas. On the douglas-rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Programming, 55:293–318, 1992.
[86] I. Erev and A. E. Roth. Predicting how people play games: Reinforcement learning in experimental games with unique, mixed strategy equilibria. American Economic Review, 88:848–881, 1998.
[87] D. Espinoza. Computing with multi-row gomory cuts. In A. Lodi, A. Panconesi, and G. Rinaldi, editors, IPCO 2008, volume 5035 of Lecture Notes in Computer Science, pages 214–224, 2008.
[88] D. G. Espinoza, R. Garcia, M. Goycoolea, G. L. Nemhauser, and M. W. P. Savelsbergh. Per-seat, on-demand air transportation part I: Problem description and an integer multicommodity flow model. Transportation Science, 42:263–278, 2008.
[89] V. Estivill-Castro and D. Wood. A survey of adaptive sorting algorithms. ACM Computing Surveys, 24(4):441–476, 1992.
[90] P. Ferragina, F. Luccio, G. Manzini, and S. Muthukrishnan. Compressing and indexing labeled trees, with applications. J. of the ACM, 2009. to appear.
[91] E. Fischer. The art of uninformed decisions: A primer to property testing. Bulletin of the EATCS, 75, 2001.
[92] M. Fischetti, F. Glover, and A. Lodi. The feasibility pump. Mathematical Programming, 104:91–104, 2005.
[93] M. Fischetti and A. Lodi. Local branching. Mathematical Programming, 98:23–47, 2003.
[94] L. Fleischer. Universally maximum flow with piece-wise constant capacity functions. Networks, 38:1–11, 2001.
[95] L. Fleischer and M. Skutella. Quickest flows over time. SIAM Journal on Computing, 36:1600–1630, 2007.
[96] L. R. Ford and D. R. Fulkerson. Constructing maximal dynamic flows from static flows. Operations Research, 6:1073–1082, 1958.
[97] D. Foster and R. V. Vohra. Asymptotic calibration. Biometrika, 85:379–390, 1998.
[98] R. M. Freund. Complexity of convex optimization using geometry-based measures and a reference point. Mathematical Programming 99, pages 197–221, 2004.
[99] R. M. Freund and J. Vera. On condition based complexity of conic linear optimization via the ellipsoidal algorithm. SIAM Journal on Optimization, 10:155–176, 1999.
[100] R. M. Freund and J. Vera. Equivalence of convex problem geometry and computational complexity in the separation oracle model. Mathematics of Operations Research, page to appear, 2009.
[101] Y. Freund and R. E. Schapire. Adaptive game playing using multiplicative weights. Games and Economic Behavior, 29:79–103, 1999.
[102] D. Fudenberg and D. K. Levine. The Theory of Learning in Games. MIT Press, Cambridge, Massachussetts, 1998.
[103] D. Gale, H. Kuhn, and A. W. Tucker. Linear programming and the theory of games. In T. C. Koopmans, editor, Activity Analysis of Production and Allocation. Wiley, New York and Chapman & Hall, London, 1951.
[104] D. Gale. Transient flows in networks. Michigan Math J., 6:59–63, 1959.
[105] M. X. Goemans and D. P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM, 42:1115–1145, 1995.
[106] D. Goldfarb and G. Iyengar. Robust portfolio selection problems. Mathematics of Operations Research, 28:1–38, 2003.
[107] O. Goldreich. Combinatorial property testing (a survey). Technical Report 56, Electronic Colloquium on Computational Complexity ECCC, 1997.
[108] O. Goldreich. Property testing in massive graphs, pages 123–147. Kluwer Academic Publishers, 2002.
[109] O. Goldreich. Contemplations on testing graph properties. In A. Czumaj, S. Muthu Muthukrishnan, R. Rubinfeld, and C. Sohler, editors, Sublinear Time Algorithms, number 05291 in Dagstuhl Seminar Proceedings. Internationales Begegnungs- und Forschungszentrum fuer Informatik (IBFI), 2005.
[110] R. E. Gomory. An algorithm for the mixed integer problem. Technical Report RM-2597, RAND Corporation, 1960.
[111] R. E. Gomory. Corner polyhedra and two-equation cutting planes. George Nemhauser Symposium, Atlanta, July 2007.
[112] R. E. Gomory and E. L. Johnson. Some continuous functions related to corner polyhedra, part I. Mathematical Programming, 3:23–85, 1972.
[113] J. Gondzio and A. Grothey. A new unblocking technique to warmstart interior point methods based on sensitivity analysis. Technical Report MS-06-005, The University of Edinburgh, School of Mathematics, 2006.
[114] M. Grötschel, L. Lovász, and A. Schrijver. Geometric Algorithms and Combinatorial Optimization. Springer-Verlag, Berlin, 1998.
[115] R. Guimerà, A. Díaz-Guilera, F. Vega-Redondo, A. Cabrales, and A. Arenas. Optimal network topologies for local search with congestion. Phys. Rev. Lett., 89(24):248701.1–248701.4, Nov 2002.
[116] D. Gusfield. Algorithms on strings, trees and sequences. Cambridge University Press, 1997.
[117] H. Hämäläinen, I. Honkala, S. Litsyn, and P. Östergård. Football pools: A game for mathematicians. American Mathematical Monthly, 102:579–588, 1995.
[118] J. Hannan. Approximation to bayes risk in repeated plays. In M. Dresher, A. W. Tucker, and P. Wolfe, editors, Contributions to the Theory of Games, volume 3, pages 97–139. Princeton University Press, 1957.
[119] P. Hansen, N. Mladenovic, and D. Urosevic. Variable neighborhood search and local branching. Computers and Operations Research, 33:3034–3045, 2006.
[120] S. Hart. Adaptive heuristics. Econometrica, 73:1401–1430, 2002.
[121] B. Heydenreich, R. Müller, and M. Uetz. Games and mechanism design in machine scheduling-an introduction. Production and Operations Management, 16:437–454, 2007.
[122] F. S. Hillier. Efficient heuristic procedures for integer linear programming with an interior. Operations Research, 17:600–637, 1969.
[123] J. B. Hiriart-Urruty and C. Lémarechal. Convex Analysis and Minimization Algorithms. Springer-Verlag, Berlin, 1993.
[124] J. Hofbauer and W. H. Sandholm. On the global convergence of stochastic fictitious play. Econometrica, 70:2265–2294, 2002.
[125] B. Hoppe and Éva Tardos. Polynomial time algorithms for some evacuation problems. In SODA ’94: Proceedings of the fifth annual ACM-SIAM symposium on Discrete algorithms, pages 433–441, Philadelphia, PA, USA, 1994. Society for Industrial and Applied Mathematics.
[126] ILOG. CPLEX 10.1. User’s Manual. ILOG SA, Gentilly, France, 2006.
[127] N. Immorlica, L. Li, V. Mirrokni, and A. S. Schulz. Coordination mechanisms for selfish scheduling. Theoretical Computer Science, 410:1589–1598, 2009.
[128] A. N. Iusem, B. Svaiter, and M. Teboulle. Entropy-like proximal methods in convex programming. Math. Oper. Res., 19:790–814, 1994.
[129] P. Jaillet, J. Bard, L. Huang, and M. Dror. Delivery cost approximations for inventory routing problems in a rolling horizon framework. Transportation Science, 36:292–300, 2002.
[130] S. Janson, T. Luczak, and A. Rucinski. Random Graphs. Wiley-Interscience Series in Discrete Mathematics and Optimization, 2000.
[131] P. Jehiel, B. Moldovanu, and E. Stacchetti. Multidimensional mechanism design for auctions with externalities. Journal of Economic Theory, 85:258–293, 1999.
[132] R. Kannan and V. Vinay. Analyzing the structure of large graphs. Manuscript, 1999.
[133] L. V. Kantorovich. Mathematical methods of organizing and planning production. Publication House of the Leningrad State University, 1939.
[134] L. V. Kantorovich. On the translocation of masses. Comptes Rendus de l’Académie des Sciences de l’U.R.S.S., 37:199–201, 1942.
[135] F.-P. Kelly. Mathematical modeling of the Internet. In B. Engquist and W. Schmid, editors, Mathematics Unlimited – 2001 and Beyond, pages 685–702. Springer-Verlag, Berlin, 2001.
[136] D. Kirkpatrick and R. Seidel. The ultimate planar convex hull algorithm. SIAM J. Comput., 15(1):287–299, 1986.
[137] M. Kiwi, F. Magniez, and M. Santha. Exact and approximate testing/correcting of algebraic functions: A survey, volume 2292 of Lecture Notes in Computer Science, pages 30–83. Springer-Verlag, 2002.
[138] M. Kiwi, G. Navarro, and C. Telha. On-line approximate string matching with bounded errors. In 19th Annual Symposium on Combinatorial Pattern Matching CPM, volume 5029 of Lecture Notes in Computer Science, pages 130–142. Springer-Verlag, 2008.
[139] K. C. Kiwiel. Proximal minimization methods with generalized bregman functions. SIAM J. Control Optim., 35:1142–1168, 1997.
[140] P. Klemperer and M. Meyer. Supply function equilibria in oligopoly under uncertainty. Econometrica, 57:1243–1277, 1989.
[141] E. Koutsoupias and C. Papadimitriou. Worst-case equilibria. In Proceedings of STACS 1999, pages 404–413, 1999.
[142] R. Kulkarni and R. Mohanty. Multilocation plant sizing and timing problem: A study of spatial-temporal decomposition procedure. Production planning and control, 7:471–481, 1996.
[143] J. F. Laslier, R. Topol, and B. Walliser. A behavioral learning process in games. Games and Economic Behavior, 37:340–366, 2001.
[144] C. Lemaréchal and C. Sagastizábal. Variable metric bundle methods: from conceptual to implementable forms. Mathematical Programming, 76:393–410, 1997.
[145] Y. Li and J.-P. P. Richard. Cook, kannan and schrijver’s example revisited. Discrete Optimization, 5(4):724–734, 2008.
[146] J. Linderoth, F. Margot, and G. Thain. Improving bounds on the football pool problem via symmetry reduction and high-throughput computing. Submitted, 2007.
[147] N. Linial. Locality in distributed graph algorithms. SIAM J. Comput., 21(1):193–201, 1992.
[148] M. S. Lobo, L. Vandenberghe, S. Boyd, and H. Lebret. Applications of second-order cone programming. Linear Algebra and its Applications, 284:193–228, 1998.
[149] A. Lokketangen and F. Glover. Solving zero-one mixed integer programming problems using tabu search. European Journal of Operational Research, 106:624–658, 1998.
[150] E. Lucas. La tour d'Hanoï, véritable casse-tête annamite, 1883. Puzzle game, Amiens. Jeu rapporté du Tonkin par le professeur N. Claus (de Siam).
[151] J. Luedtke and S. Ahmed. A sample approximation approach for optimization with probabilistic constraints. SIAM Journal on Optimization, 19:674–699, 2008.
[152] O. L. Mangasarian and E. W. Wild. Proximal support vector machine classifiers. In Proceedings KDD-2001: Knowledge Discovery and Data Mining, pages 77–86, 2001.
[153] B. Manthey and R. Reischuk. Smoothed analysis of binary search trees. In 16th International Symposium in Algorithms and Computation ISAAC, volume 3827 of Lecture Notes in Computer Science, pages 483–492. Springer-Verlag, 2005.
[154] F. Margot. Testing cut generators for mixed-integer linear programming. Mathematical Programming Computation, 1:69–95, 2009.
[155] B. Martinet. Régularisation d’inéquations variationnelles par approximations successives. Revue Française d’Informatique et de Recherche Opérationnelle, 4:154–159, 1970.
[156] A. Mas-Colell, M. Whinston, and J. Green. Microeconomic Theory. Oxford University Press, New York, 1995.
[157] M. Mavronicolas, B. Monien, V. Papadopoulou, and F. Schoppmann. Voronoi games on cycle graphs, volume 5162 of Lecture Notes in Computer Science, pages 503–514. Springer-Verlag, 2008.
[158] R. Mendelssohn. Improved bounds for aggregated linear programs. Operations Research, 28:1450–1453, 1980.
[159] E. Minieka. Maximal, lexicographic, and dynamic network flows. Operations Research, 21:517–527, 1973.
[160] G. Minty. Monotone (nonlinear) operators in Hilbert space. Duke Math. J., 29:341–346, 1962.
[161] J. J. Moreau. Propriétés des applications prox. Comptes Rendus de l’Académie des Sciences de Paris, 256:1069–1071, 1963.
[162] I. Munro and R. Raman. Succinct representation of balanced parentheses and static trees. SIAM J. on Comput., 31(3):762–776, 2001.
[163] I. Munro and P. Spira. Sorting and searching in multisets. SIAM J. on Comput., 5(1):1–8, 1976.
[164] R. Myerson. Optimal auction design. Mathematics of Operations Research, 6:58–73, 1981.
[165] D. Naddef. Polyhedral theory and branch-and-cut algorithms for the symmetric traveling salesman problem. Combinatorial Optimization, 12:29–116, 2002.
[166] J. S. Nath and C. Bhattacharyya. Maximum margin classifiers with specified false positive and false negative error rates. In Proceedings of the Seventh SIAM International Conference on Data Mining, April 26-28, 2007.
[167] G. Navarro and V. Mäkinen. Compressed full-text indexes. ACM Computing Surveys, 39(1), 2007. Article 2, 61 pages.
[168] G. Navarro and M. Raffinot. Flexible pattern matching in strings – Practical on-line search algorithms for texts and biological sequences. Cambridge University Press, 2002.
[169] Z. Neeman. Property rights and efficiency of voluntary bargaining under asymmetric information. Review of Economic Studies, 66:679–691, 1999.
[170] A. Nemirovski and A. Shapiro. Convex approximations of chance constrained programs. SIAM Journal on Optimization, 17(4):969–996, 2006.
[171] Y. Nesterov and A. Nemirovskii. Interior-Point Polynomial Algorithms in Convex Programming. SIAM, Philadelphia, 1994.
[172] Y. Nesterov and M. Todd. On the Riemannian geometry defined by self-concordant barriers and interior-point methods. Foundations of Computational Mathematics, 2:333–361, 2002.
[173] A. Newman and M. Kuchta. Using aggregation to optimize long-term production planning at an underground mine. European Journal of Operational Research, 176:1205–1218, 2007.
[174] T. Nguyen, J. Strodiot, and V. Nguyen. A bundle method for solving equilibrium problems. Math. Program. B, 116:529–552, 2009.
[175] N. Nisan, T. Roughgarden, E. Tardos, and V. V. Vazirani. Algorithmic game theory. Cambridge University Press, 2007.
[176] J. Outrata and D. Sun. On the coderivative of the projection operator onto the second-order cone. Set-Valued Analysis, 16:999–1014, 2008.
[177] B. K. Pagnoncelli, S. Ahmed, and A. Shapiro. Computational study of a chance constrained portfolio selection problem. Journal of Optimization Theory and Applications, 142(2):399–416, 2009.
[178] D. Peleg. Distributed Computing: A Locality-Sensitive Approach. SIAM, Philadelphia, 2000.
[179] W. Powell, A. Ruszczynski, and H. Topaloglu. Learning algorithms for separable approximations of stochastic optimization problems. Mathematics of Operations Research, 29:814–836, 2004.
[180] G. Reinelt. TSPLIB - a traveling salesman problem library. ORSA Journal on Computing, 3:376–384, 1991.
[181] J. Renegar. Some perturbation theory for linear programming. Mathematical Programming, 65:73–91, 1994.
[182] J. Robinson. An iterative method of solving a game. Annals of Mathematics, 54:296–301, 1951.
[183] J. C. Rochet and P. Chone. Ironing, sweeping and multidimensional screening. Econometrica, 66:783–826, 1998.
[184] R. T. Rockafellar. Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res., 1:97–116, 1976.
[185] D. Rogers, R. Plante, R. Wong, and J. Evans. Aggregation and disaggregation techniques and methodology in optimization. Operations Research, 39:553–582, 1991.
[186] D. Ron. Property testing, volume 9 of Combinatorial Optimization Series, pages 597–649. Springer-Verlag, 2001.
[187] D. Ron. Property testing: A learning theory perspective. Foundations and Trends in Machine Learning, 1(3):307–402, 2008.
[188] C. Roos, T. Terlaky, and J. P. Vial. Interior Point Methods for Linear Optimization. Springer-Verlag, Berlin, 2006.
[189] R. Rubinfeld. Sublinear time algorithms. In Proceedings of the International Congress of Mathematicians, volume 3. AMS, 2006.
[190] R. M. Saltzman and F. S. Hillier. A heuristic ceiling point algorithm for general integer linear programming. Management Science, 38:263–283, 1992.
[191] S. Sen and N. Gupta. Distribution-sensitive algorithms. Nordic J. Comput., 6(2):194–211, 1999.
[192] A. Shapiro, D. Dentcheva, and A. Ruszczyński. Lectures on Stochastic Programming: Modeling and Theory. MPS-SIAM Series on Optimization, Philadelphia, PA, 2009.
[193] J. Shapiro. Mathematical programming: Structures and algorithms. Wiley, New York, 1979.
[194] J. Shapiro. A note on node aggregation and Benders’ decomposition. Mathematical Programming, 29:113–119, 1984.
[195] J. Shapiro and W. Powell. A metastrategy for dynamic resource management problems based on informational decomposition. Informs Journal on Computing, 18:43–60, 2006.
[196] P. K. Shivaswamy, C. Bhattacharyya, and A. J. Smola. Second order cone programming approaches for handling missing and uncertain data. Journal of Machine Learning Research, 7:1283–1314, 2006.
[197] Y. Shoham and K. Leyton-Brown. Multiagent systems: algorithmic, game-theoretic, and logical foundations. Cambridge University Press, 2009.
[198] D. Sleator and R. Tarjan. Self-adjusting binary search trees. J. ACM, 32(3):652–686, 1985.
[199] D. Spielman. Spectral graph theory and its applications. In 48th Annual IEEE Symposium on Foundations of Computer Science FOCS, pages 29–38, 2007.
[200] D. Spielman and S.-H. Teng. Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time. In 33rd Annual ACM Symposium on Theory of Computing STOC, pages 296–305. ACM, 2001.
[201] D. Spielman and S.-H. Teng. Smoothed analysis (motivation and discrete models). In 8th International Workshop on Algorithms and Data Structures WADS, volume 2748 of Lecture Notes in Computer Science, pages 256–270. Springer-Verlag, 2003.
[202] S. Storoy. Weights improvement in column aggregation. European Journal of Operational Research, 73:510–516, 1994.
[203] S. Storoy. Optimal weights and degeneracy in variable aggregated linear programs. Operations Research Letters, 19:29–31, 1996.
[204] J. F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, 11:625–653, 1999.
[205] D. Sun. The strong second order sufficient condition and constraint nondegeneracy in nonlinear semidefinite programming and their applications. Mathematics of Operations Research, 31:761–776, 2006.
[206] K. Tammer. The application of parametric optimization and imbedding for the foundation and realization of a generalized primal decomposition approach. In J. Guddat, H. Jongen, B. Kummer, and F. Nozicka, editors, Parametric Optimization and Related Topics, volume 35 of Mathematical Research. Akademie-Verlag, Berlin, 1987.
[207] M. J. Todd. A lower bound on the number of iterations of primal-dual interior-point methods for linear programming. Technical Report 1050, Cornell University, Ithaca NY 14853-3801, 1993.
[208] L. Vandenberghe and S. Boyd. Semidefinite programming. SIAM Review, 38:49–95, 1996.
[209] J. von Neumann. Über ein ökonomisches Gleichungssystem und eine Verallgemeinerung des Brouwerschen Fixpunktsatzes. Ergebnisse eines mathematischen Kolloquiums, 8:73–87, 1937.
[210] J. von Neumann. Discussion of a maximum problem. Institute for Advanced Study, Princeton, N.J., 1947.
[211] L. Walras. Éléments d’économie politique pure ou théorie de la richesse sociale. L. Corbaz & Cie, Lausanne, 1874.
[212] J. G. Wardrop. Some theoretical aspects of road traffic research. In Proceedings of the Institution of Civil Engineers, Part II, pages 325–378, 1952.
[213] R. Wenger. Randomized quickhull. Algorithmica, 17(3):322–329, 1997.
[214] W. Wilkinson. An algorithm for universal maximal dynamic flows in a network. Operations Research, 19:1602–1612, 1971.
[215] H. Wolkowicz, R. Saigal, and L. Vandenberghe. Handbook of Semidefinite Programming: Theory, Algorithms and Applications. Springer-Verlag, Berlin, 2000.
[216] M. H. Xu. Proximal alternating directions method for structured variational inequalities. J. Optim. Theory Appl., 134:107–117, 2007.
[217] E. A. Yildirim and M. J. Todd. Sensitivity analysis in linear programming and semidefinite programming using interior-point methods. Mathematical Programming, 90:229–261, 2001.
[218] E. A. Yildirim and S. J. Wright. Warm-start strategies in interior-point methods for linear programming. SIAM Journal on Optimization, 12:782–810, 2002.
[219] H. P. Young. Strategic Learning and its Limits. Oxford University Press, 2004.
[220] P. H. Zipkin. Bounds for aggregating nodes in network problems. Mathematical Programming, 19:155–177, 1980.
[221] P. H. Zipkin. Bounds on the effect of aggregating variables in linear programs. Operations Research, 28:403–418, 1980.
[222] P. H. Zipkin. Aggregation and disaggregation in convex network problems. Networks, 12:101–117, 1982.