
Sorting and Selection with Imprecise Comparisons∗
Miklós Ajtai
IBM Research - Almaden
[email protected]
Vitaly Feldman
IBM Research - Almaden
[email protected]
Avinatan Hassidim†
Bar Ilan University, Israel
[email protected],
Jelani Nelson‡
Harvard University
[email protected]
January 13, 2015
Abstract
We consider a simple model of imprecise comparisons: there exists some δ > 0 such that
when a subject is given two elements to compare, if the values of those elements (as perceived
by the subject) differ by at least δ, then the comparison will be made correctly; when the two
elements have values that are within δ, the outcome of the comparison is unpredictable. This
model is inspired by both imprecision in human judgment of values and also by bounded but
potentially adversarial errors in the outcomes of sporting tournaments.
Our model is closely related to a number of models commonly considered in the psychophysics
literature where δ corresponds to the just noticeable difference unit (JND) or difference threshold.
In experimental psychology, the method of paired comparisons was proposed as a means for
ranking
preferences amongst n elements of a human subject. The method requires performing all n(n − 1)/2 comparisons, then sorting elements according to the number of wins. The large number
of comparisons is performed to counter the potentially faulty decision-making of the human
subject, who acts as an imprecise comparator.
We show that in our model the method of paired comparisons has optimal accuracy, minimizing the errors
introduced by the imprecise comparisons. However, it is also wasteful, as it requires all n(n − 1)/2 comparisons. We show that the same optimal guarantees can be achieved using 4n^{3/2} comparisons, and we prove the optimality of our method. We then explore the general tradeoff between the guarantees on the error and the number of comparisons for the
problems of sorting, max-finding, and selection. Our results provide strong lower bounds and
close-to-optimal solutions for each of these problems.
∗ A preliminary version of this work containing weaker forms of some of the results appeared in the proceedings of the 36th International Colloquium on Automata, Languages and Programming (ICALP) 2009.
† Part of this work was done while the author was at Google Research, Israel.
‡ Supported by NSF grant CCF-0832797. Part of this work was done while the author was at IBM Research - Almaden.
1 Introduction
Let x1 , . . . , xn be n elements where each xi has an unknown value val(xi ). We want to find the
element with the maximum value using only pairwise comparisons. However, the outcomes of
comparisons are imprecise in the following sense. For some fixed δ > 0, if |val(xi ) − val(xj )| ≤ δ,
then the result of the comparison can be either “≥” or “≤”. Otherwise, the result of the comparison
is correct. It is easy to see that in such a setting it might be impossible to find the true maximum
(for example when the values of all the elements are within δ). It might however be possible to
identify an approximate maximum, that is, an element xi∗ such that for all xi, val(xi) − val(xi∗) ≤ kδ
for some, preferably small, value k. In addition, our goal is to minimize the number of comparisons
performed to find xi∗ . We refer to the minimum value k such that an algorithm’s output is always
guaranteed to be kδ-close to the maximum as the error of the algorithm in this setting. Similarly,
to sort the above elements with error k we need to find a permutation π such that if π(i) < π(j)
then val(xi ) − val(xj ) ≤ kδ.
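To make the model concrete, here is a small Python sketch of such an imprecise comparator (our illustration, not part of the model's formal definition; the class name ImpreciseComparator and the coin-flip tie-breaking are our own choices — the model only requires that outcomes within δ be unpredictable, possibly adversarial):

    import random

    class ImpreciseComparator:
        """Answers comparisons correctly only when the hidden values differ by
        more than delta. Within delta the outcome is unpredictable; this sketch
        flips a coin, but an adversary choosing the answer is also consistent
        with the model. Repeated queries on the same pair need not agree."""
        def __init__(self, values, delta=1.0):
            self.values = values   # values[i] = val(x_i); hidden from the algorithm
            self.delta = delta
            self.count = 0         # number of comparisons asked so far

        def geq(self, i, j):
            """Return True iff the comparator answers "x_i >= x_j"."""
            self.count += 1
            diff = self.values[i] - self.values[j]
            if abs(diff) > self.delta:
                return diff > 0                      # guaranteed-correct comparison
            return random.choice([True, False])      # within delta: arbitrary outcome

An algorithm interacts with the comparator only through geq and never sees the underlying values.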
A key issue that our work addresses is that in any sorting (or max-finding) algorithm, errors
resulting from imprecise comparisons might accumulate, causing the final output to have high error.
Consider, for example, applying the classical bubble sort algorithm to a list of elements that are
originally sorted in the reverse order and where the difference between two adjacent elements is
exactly δ. All the comparisons will be between elements within δ and therefore, in the worst case, the
order will not be modified by the sorting, yielding error (n−1)δ. Numerous other known algorithms
that primarily optimize the number of comparisons can easily be shown to incur a relatively high error. As can be easily demonstrated (Theorem 3.1), performing all n(n − 1)/2 comparisons and then sorting elements according to the number of wins, a “round-robin tournament”, achieves error k = 2, which is the lowest possible (Theorem 3.2). A natural question we ask here is whether n(n − 1)/2 comparisons
are necessary to achieve the same error. We explore the same question for all values of k in the
problems of sorting, max-finding, and general selection.
One motivation for studying this problem comes from social sciences. A common problem both
in experimental psychology and sociology is to have a human subject rank preferences amongst
many candidate options. It also occurs frequently in marketing research [27, Chapter 10], and
in training information retrieval algorithms using human evaluators [1, Section 2.2]. The basic
method to elicit preferences is to present the subject two alternatives at a time and ask which is
the preferred one. The common approach to this problem today was presented by Thurstone as
early as 1927, and is called the “method of paired comparisons” (see [9] for a thorough treatment).
In this method, one asks the subject to give preferences for all pairwise comparisons amongst n
elements. A ranked preference list is then determined by the number of “wins” each candidate
element receives. A central concept in these studies introduced as far back as the 1800s by Weber
and Fechner is that of the just noticeable difference (JND) unit or difference threshold ∆. If two
physical stimuli with intensities x ≤ y have y ≤ x + ∆x, a human will not be able to reliably
distinguish which intensity is greater. The idea was later generalized by Thurstone to having
humans not only compare physical stimuli, but also abstract concepts [28]. By the Weber-Fechner
law, stimuli with intensities x and y cannot be distinguished when 1/(1+∆) ≤ x/y ≤ 1+∆. This is
equivalent to saying that intensities are indistinguishable when the absolute difference between the
natural logarithms of the intensities is less than δ = ln(1+∆). Therefore JND in the Weber-Fechner
law corresponds to the imprecision δ of our model of comparisons when intensities are measured
on the logarithmic scale. More generally, we can always assume that the intensities are measured
on the scale for which JND corresponds to an absolute difference between values.
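Written out, the change of scale used here is a one-line manipulation of the inequality stated above (in LaTeX notation):

    \frac{1}{1+\Delta} \le \frac{x}{y} \le 1+\Delta
    \;\iff\; -\ln(1+\Delta) \le \ln x - \ln y \le \ln(1+\Delta)
    \;\iff\; |\ln x - \ln y| \le \delta \quad \text{for } \delta = \ln(1+\Delta).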
Most previous work on the method of paired comparisons has been through the lens of statistics.
In such work the JND is modeled as a random variable and the statistical properties of Thurstone’s
method are studied [9]. Our problem corresponds to a simplified model of this setting which does not require any statistical assumptions and which we study primarily from a combinatorial perspective.
Another context that captures the intuition of our model is that of designing a sporting tournament based on win/lose games. There, biases of a judge and unpredictable events can change
the outcome of a game when the strengths of the players are close. Hence one cannot necessarily
assume that the outcome is truly random in such a close call. It is clear that both restricting the
influence of the faulty outcomes and reducing the total number of games required are important in
this scenario, and hence exploring the tradeoff between the two is of interest. For convenience, in
the rest of the paper we often use the terminology borrowed from sporting tournaments.
Accordingly, the problems we consider have a natural interpretation as problems on a tournament graph (that is, a complete directed graph with only one edge between any two vertices). We
can view all the comparisons that were revealed by the comparator as a digraph G. The vertices of
G are the n elements and it contains the directed edge (xi , xj ) if and only if a comparison between
xi and xj has been made, and the comparator has responded with “xi ≥ xj ”. At any point in time
the comparison graph is a subgraph of some unknown tournament graph. The problem of finding
a maximum element with error k is then equivalent to finding a vertex in such a graph from which
there exists a directed path of length at most k to any other vertex while minimizing the number
of edges which need to be revealed (or “matches” in the context of tournaments). Such a vertex is referred to as a k-king (or just a king for k = 2) [19, 21]. Existence and properties of such elements
for various tournament graphs have been studied in many contexts. Sorting with error k gives an
ordering of vertices such that if vertex xi occurs after xj in the order then there exists a directed
path of length at most k from xi to xj . The connection to tournament graphs is made explicit in
Section 5.
Finally, in a number of theoretical contexts responses are given by an imprecise oracle. For
example, for weak oracles given by Lovász in the context of optimization [20] and for the statistical
query oracle in learning [18] the answer of the oracle is undefined when some underlying value z is
within a certain small range of the decision boundary. When z itself is the difference of two other
values, say z1 and z2, then the oracle's answer is, in a way, an imprecise comparison of z1 and z2. This
correspondence together with our error 2 sorting algorithm was used by one of the authors to derive
algorithms in the context of evolvability [11].
1.1 Our results
We first examine the simpler problem of finding only the maximum element. For this problem, we
give a deterministic max-finding algorithm with error 2 using 2n3/2 comparisons. This contrasts
with the method of paired comparisons, which makes (n2 − n)/2 comparisons to achieve the same
error. Using our algorithm recursively, we build deterministic algorithms with error k that require
O(n^{1+1/(3·2^{k−2}−1)}) comparisons. We also give a lower bound of Ω(n^{1+1/(2^k−1)}) comparisons for the
problem. The bounds are almost tight — the upper bound for our error k algorithm is less than
our lower bound for error (k − 1) algorithms. We also give a linear-time randomized algorithm that
achieves error 3 with probability at least 1 − 1/n^2, showing that randomization greatly changes the
complexity of the problem.
We then study the problems of selecting an element of a certain order and sorting. For k = 2,
we give a deterministic algorithm that sorts using 4n^{3/2} comparisons (and in particular can be used for selecting an element of any order). For general k, we show that selection of an element of any order i can be achieved using O(2^k · n^{1+1/2^{k−1}}) comparisons and sorting with error k can be performed using O(4^k · n^{1+1/2^{k−1}}) comparisons.
We give a lower bound of Ω(n^{1+1/2^{k−1}}) comparisons for sorting with error k. When k = O(1) our
bounds are tight up to a constant factor and are at most a log n factor off for general k. Our lower
bounds for selection depend on the order of the element that needs to be selected and interpolate
between the lower bounds for max-finding and the lower bounds for sorting. For k ≥ 3, our lower
bound for finding the median (and also for sorting) is strictly larger than our upper bound for
max-finding. For example, for k = 3 the lower bound for sorting is Ω(n^{5/4}), whereas max-finding requires only O(n^{6/5}) comparisons.
Note that we achieve log log n error for max-finding in O(n) comparisons, and log log n error
for sorting in O(n log^2 n) comparisons. Standard methods using the same number (up to a log n
factor) of comparisons (e.g. a single-elimination tournament tree, or Mergesort) can be shown to
incur error at least log n. Also, all the algorithms we give are efficient in that their running times
are of the same order as the number of comparisons they make.
The basis of our deterministic algorithms for both max-finding and selection is an efficient algorithm for a small value of k (namely, k = 2). The algorithms for larger error k use several different ways to partition elements, recursively apply algorithms for smaller error, and then combine the results. Achieving nearly tight results for max-finding requires in part relaxing the problem to that
of finding a small k-max-set, or a set which is guaranteed to contain at least one element of value
at least x∗ − kδ, where x∗ is the maximum value of an element (we interchangeably use x∗ to refer
to an element of maximum value as well). It turns out we can find a k-max-set in fewer comparisons than the lower bound for error-k max-finding algorithms. Exploiting this allows
us to develop an efficient recursive max-finding algorithm. We note a similar approach of finding
a small set of “good” elements was used by Borgstrom and Kosaraju [7] in the context of noisy
binary search.
To obtain our lower bounds for deterministic algorithms we show that the problems we consider
have equivalent formulations on tournament graphs in which the goal is to ensure existence of
short (directed) paths from a certain node to other nodes. Using a comparison oracle that always
prefers elements that had fewer wins in previous rounds, we obtain bounds on the minimum number of edges that are required to create the paths of desired length. Such bounds are then translated
back into bounds on the number of comparisons required to achieve specific error guarantees for
the problems we consider.
For our randomized max-finding algorithm, we use a type of tournament with random seeds
at each level, in combination with random sampling at each level of the tournament tree. By performing a round-robin tournament on the top few tournament players together with the sampled elements, we obtain an element of value at least x∗ − 3δ with polynomially small error probability.

    Task          | Upper bound, k = 2 | Upper bound, general k    | Upper bound, k = log log n | Lower bound
    Find maximum  | 2n^{3/2}           | O(n^{1+1/(3·2^{k−2}−1)})  | O(n)                       | Ω(n^{1+1/(2^k−1)})
    Select i-th   | 4n^{3/2}           | O(2^k · n^{1+1/2^{k−1}})  | O(n log n)                 | Ω(i · max{i^{1/(2^k−1)}, n^{1/2^{k−1}}})
    Sort          | 4n^{3/2}           | O(4^k · n^{1+1/2^{k−1}})  | O(n log^2 n)               | Ω(n^{1+1/2^{k−1}})

Table 1: Overview of the bounds for deterministic algorithms with error k. In the selection task the i-th smallest element is chosen (with the maximum being the n-th smallest).
1.2 Related Work
Handling noise in binary search procedures was first considered by Rényi [24] and by Ulam [29]. An
algorithm for solving the Ulam-Rényi game was proposed by Rivest et al. [25], in which an adversarial comparator can err a bounded number of times. They gave an algorithm with query complexity
O(log n) which succeeds if the number of adversarial errors is constant.
Yao and Yao [31] introduced the problem of sorting and of finding the maximal element in a
sorting network when each comparison gate either returns the right answer or does not work at all.
For finding the maximal element, they showed that it is necessary and sufficient to use (e+1)(n−1)
comparators when e comparators can be faulty. Ravikumar, Ganesan and Lakshmanan extended
the model to arbitrary errors, showing that O(en) comparisons are necessary and sufficient [23].
For sorting, Yao and Yao showed that O(n log n+en) gates are sufficient. In a different fault model,
and with a different definition of a successful sort, Finocchi and Italiano [13] showed an O(n log n)
time algorithm resilient to (n log n)^{1/3} faults. An improved algorithm handling (n log n)^{1/2} faults was later given by Finocchi, Grandoni and Italiano [12].
In the model where each comparison is incorrect with some probability p, Feige et al. [10] and
Assaf and Upfal [3] give algorithms for several comparison problems, and [4, 17] give algorithms for
binary search. We refer the reader interested in the rich history and models of faulty comparison
problems to a survey of Pelc [22] and a monograph of Cicalese [8].
We point out that some of the bounds we obtain appear similar to those known for max-finding,
selection, and sorting in parallel in Valiant’s model [30]. In particular, our bounds for max-finding
are close to those obtained by Valiant for the parallel analogue of the problem (with the error used
in place of parallel time) [30], and our lower bound of Ω(n^{1+1/(2^k−1)}) for max-finding with error k
is identical to a lower (and upper) bound given by Häggkvist and Hell [16] for merging two sorted
arrays each of length n using a k-round parallel algorithm. Despite these similarities in bounds,
our techniques are different, and we are not aware of any deep connections. As some evidence of
the difference between the problems we note that for sorting in k parallel rounds it is known that
Ω(n^{1+1/k}) comparisons are required [2, 6, 15], whereas in our model, for constant k, we can sort with error k in n^{1+1/2^{Θ(k)}} comparisons. For a survey on parallel sorting algorithms, the reader is referred to [14].
The authors have recently learned that finding a king and sorting kings in a tournament graph
while minimizing the number of uncovered edges was previously studied by Shen, Sheng and Wu [26].
As follows from our results, this problem is equivalent to max-finding and sorting by a deterministic
algorithm with error 2. Their upper and lower bounds for this case are (asymptotically) identical
to our bounds and are based on essentially the same techniques. Our results can be seen as
a generalization of their results to k-kings for all k ≥ 2. We also remark that for randomized
algorithms our problem is no longer equivalent to the problem considered in [26].
2 Notation
Throughout this document we let x∗ denote some xi of the maximum value (if there are several
such elements, we choose one arbitrarily). Furthermore, we use xi interchangeably to refer to both the i-th element and its value, e.g., xi > xj should be interpreted as val(xi) > val(xj).
We assume δ = 1 without loss of generality, since the problem with arbitrary δ > 0 is equivalent
to the problem with δ = 1 and input values xi /δ. We stress that the algorithm does not know δ.
We say x defeats y when the comparator claims that x is larger than y (and we similarly use
the phrase y loses to x). Note that x defeats y implies x ≥ y − 1. We do not necessarily assume
that repeating the same comparison several times would give the same result, and our algorithms
do not repeat a comparison twice. We say x is k-greater than y (x ≥k y) if x ≥ y − k. The term
k-smaller is defined analogously. A set of elements T is k-greater than a set of elements S if for
every y ∈ S and every x ∈ T , x ≥k y. We say an element is a k-max of a set if it is k-greater than
all other elements in the set. If the set is not specified explicitly then we refer to the set of all input
elements. A permutation xπ(1) , . . . , xπ(n) is k-sorted if xπ(i) ≥k xπ(j) for every i > j. A k-max-set
is a subset of all elements which contains at least one element of value at least x∗ − k.
All logarithms throughout this document are base-2. For simplicity of presentation, we occasionally omit floors and ceilings and ignore rounding errors when they have an insignificant effect
on the bounds.
3 Max-Finding
In this section we give deterministic and randomized algorithms for max-finding.
3.1 Deterministic Algorithms
We start by showing that the method of paired comparisons provides an optimal error guarantee,
not just for max-finding, but also for sorting.
Theorem 3.1 Sorting according to the number of wins in a round-robin tournament has error at
most 2.
Proof. Let x, y be arbitrary elements with y strictly less than x − 2. For any z that y defeats, x
also defeats z. Furthermore, x defeats y, and thus x has strictly more wins than y, implying y is
placed lower in the sorted order.
Algorithm 2-MaxFind: // Returns an element of value at least x∗ − 2. The value s > 1 is a parameter which is by default ⌈√n⌉ when not specified.
1. Label all xi as candidates.
2. while there are more than s candidate elements:
(a) Pick an arbitrary subset of s of the candidate elements and play them in a
round-robin tournament. Let x be the element with the largest number of wins.
(b) Compare x against all candidate elements and eliminate all elements that lose
to x.
3. Play the remaining (at most s) candidate elements in a round-robin tournament and
return the element with the largest number of wins.
Figure 1: The algorithm 2-MaxFind for finding a 2-max.
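For concreteness, the following Python sketch of ours transcribes the algorithm in Figure 1 (the helper names round_robin_beats and two_max_find and the ImpreciseComparator interface from the introduction are our illustrative choices; reusing tournament outcomes when x is compared against group members follows the convention of Section 2 that a comparison is never repeated):

    import math

    def round_robin_beats(comp, elems):
        """Round-robin tournament recording who defeated whom.
        beats[e] is the set of elements that e defeated; e has len(beats[e]) wins."""
        beats = {e: set() for e in elems}
        for a in range(len(elems)):
            for b in range(a + 1, len(elems)):
                i, j = elems[a], elems[b]
                if comp.geq(i, j):
                    beats[i].add(j)
                else:
                    beats[j].add(i)
        return beats

    def two_max_find(comp, elems, s=None):
        """2-MaxFind (Figure 1): returns an element of value at least max - 2."""
        candidates = list(elems)
        if s is None:
            s = max(2, math.isqrt(len(candidates)))      # default s, roughly sqrt(n)
        while len(candidates) > s:
            group = candidates[:s]                       # an arbitrary subset of size s
            beats = round_robin_beats(comp, group)
            x = max(group, key=lambda e: len(beats[e]))  # largest number of wins
            survivors = [x]
            for e in candidates:
                if e == x:
                    continue
                if e in beats:               # outcome against x known from the tournament
                    lost_to_x = e in beats[x]
                else:
                    lost_to_x = not comp.geq(e, x)
                if not lost_to_x:
                    survivors.append(e)      # eliminate every element that loses to x
            candidates = survivors
        beats = round_robin_beats(comp, candidates)
        return max(candidates, key=lambda e: len(beats[e]))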
Theorem 3.2 No deterministic max-finding algorithm has error less than 2.
Proof. Given three elements a, b, c, the comparator can claim a > b > c > a, making the elements indistinguishable. Without loss of generality, suppose the algorithm outputs a. Then the values could be a = 0, b = 1, c = 2 (consistent with all the answers given), implying an error of 2.
In Figure 1 we give our error 2 algorithm for max-finding.
Lemma 3.3 For every s ≤ n, the max-finding algorithm 2-MaxFind has error 2 and makes at most (n − s)s + (n^2 − s^2)/(s − 1) comparisons. In particular, the number of comparisons is at most 2n^{3/2} for s = ⌈√n⌉.
Proof. We first analyze the number of comparisons. In the t-th iteration, the number of comparisons is at most s(s − 1)/2 + (nt − s), where nt is the number of candidate elements in round t. We now bound the number of iterations and nt. In all but the last iteration, the total number of comparisons made in the round-robin tournament is (s choose 2) = s(s − 1)/2. Thus by an averaging argument, the element which won the largest number of comparisons won at least (s − 1)/2 times. Thus, at least (s − 1)/2 elements are eliminated in each iteration, implying the number of iterations is at most 2(n − s)/(s − 1) and nt ≤ n − t(s − 1)/2. The total number of comparisons is thus at most

    Σ_{t ≤ 2(n−s)/(s−1)} [s(s − 1)/2 + n − t(s − 1)/2] ≤ (n − s)s + (n^2 − s^2)/(s − 1).
We now analyze error in two cases. The first case is that x∗ is never eliminated, and thus x∗
participates in Step 3. Theorem 3.1 then ensures that the final output is of value at least x∗ − 2.
Otherwise, consider the iteration when x∗ is eliminated. In this iteration, it must be the case that the x chosen in Step 2(a) has x ≥ x∗ − 1, and thus any element with value less than x∗ − 2 was
also eliminated in this iteration. In this case all future iterations only contain elements of value at
least x∗ − 2, and so again the final output has value at least x∗ − 2.
The key recursion step of our general error max-finding algorithm is the algorithm 1-Cover
(given in Lemma 3.5) which is based on 2-MaxFind and the following lemma.
Lemma 3.4 There is a deterministic algorithm which makes n(n − 1)/2 comparisons and outputs a 1-max-set of size at most ⌈log n⌉.
Proof. We build the output set in a greedy manner. Initialize S = ∅. At each step consider
the subtournament on the vertices T defined to be those vertices neither in S nor defeated by an
element of S. An averaging argument shows there exists an element in T which wins at least half
its matches in this subtournament; add this element to S. Note in the next step |T | decreases by
a factor of at least 2, so there are at most dlog ne iterations. Furthermore, at least one element in
the final set S must either have value x∗ or have defeated x∗ , and thus S is a 1-max-set.
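In code, this greedy construction is short (a sketch of ours, reusing round_robin_beats from the 2-MaxFind sketch above):

    def greedy_one_max_set(comp, elems):
        """Lemma 3.4: one full round-robin (n(n-1)/2 comparisons), then a greedy
        extraction of a 1-max-set of size at most ceil(log n)."""
        beats = round_robin_beats(comp, elems)
        S, T = [], set(elems)
        while T:
            # By averaging, some element wins at least half its matches within T.
            best = max(T, key=lambda e: len(beats[e] & T))
            S.append(best)
            T -= beats[best] | {best}    # T shrinks by a factor of at least 2
        return S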
We now obtain 1-Cover by setting s = ⌈2√n⌉ in Figure 1, then returning the union of the x that were chosen in any iteration of Step 2(a), in addition to the output of Lemma 3.4 on the elements in the final tournament in Step 3.
Lemma 3.5 There is an algorithm 1-Cover making at most 3 · n^{3/2} comparisons which finds a 1-max-set of size at most √n (for n ≥ 81).
Proof. Run the algorithm 2-MaxFind with s = ⌈2√n⌉. Return the set consisting of all elements that won the round-robin tournament in Step 2(a) of Figure 1 in at least one iteration, in addition to a size-⌈log s⌉ set 1-greater than the candidate elements which were left in Step 3 (using Lemma 3.4). The total size of the returned set is thus ⌈n/s⌉ − 1 + ⌈log s⌉ ≤ ⌈√n/2⌉ + ⌈log(⌈2√n⌉)⌉ − 1. For n ≥ 81, this is at most √n.
To show correctness, consider the element x∗ of maximal value. Either x∗ was eliminated in Step 2(b) of some iteration, in which case the element x that eliminated x∗ had value at least x∗ − 1, or x∗ survived until the last iteration, in which case the set constructed via Lemma 3.4 is a 1-max-set. Finally, note that the number of comparisons is the same as the number of comparisons used by 2-MaxFind (with the same s) and therefore is less than 3n^{3/2}.
We are now ready to give our main algorithm for finding a k-max, shown in Figure 2.
Theorem 3.6 For every 3 ≤ k ≤ log log n, k-MaxFind uses O(n^{1+1/(3·2^{k−2}−1)}) comparisons and finds a k-max.
Proof. We prove by induction that for any 2 ≤ k ≤ log log n, k-MaxFind uses at most 54 · n^{3·2^k/(3·2^k−4)} = 54 · n^{1+1/(3·2^{k−2}−1)} comparisons and finds a k-max. First, by Lemma 3.3, this holds for 2-MaxFind.
We now prove the bound on the error. Let x be the element returned by k-MaxFind. By the
inductive hypothesis, for every y ∈ ∪_{i=1}^t Ti, x ≥k−1 y. In addition, for every input element xj there
exists y ∈ Ti for some i such that y ≥1 xj . Therefore, x ≥k xj for every j ∈ [n].
The total number of comparisons used by k-MaxFind can be bounded as follows.
Algorithm k-MaxFind: // Returns a k-max for k ≥ 3
1. If n ≤ 81 return the output of 2-MaxFind on the input elements.
2. Equipartition the n elements into sets S1, . . . , St each of size r = max{81, 4 · n^{8/(3·2^k−4)}}
3. Call 1-Cover on each set Si to recover a set Ti 1-greater than Si .
4. Return (k − 1)-MaxFind(∪ti=1 Ti ) // Recursion stops at 2-MaxFind.
Figure 2: The algorithm k-MaxFind for finding a k-max.
• If n ≤ 81 then 2-MaxFind is invoked directly and uses at most 4 · n^{3/2} ≤ 36 · n comparisons. Otherwise,

• t = n/r invocations of 1-Cover. By Lemma 3.5, this requires at most 3 · r^{3/2} · n/r = 3√r · n comparisons, which equals max{27 · n, 6 · n^{3·2^k/(3·2^k−4)}}. Note that n^{3·2^k/(3·2^k−4)} ≥ n and therefore we can use 27 · n^{3·2^k/(3·2^k−4)} as an upper bound.

• The invocation of (k − 1)-MaxFind on ∪_{i=1}^t Ti. By Lemma 3.5, the size of each Ti is at most √r. Therefore, |∪_{i=1}^t Ti| = √r · n/r = n/√r ≤ n^{(3·2^k−8)/(3·2^k−4)}/2. By the inductive assumption this invocation requires at most

    54 · (n^{(3·2^k−8)/(3·2^k−4)}/2)^{1+1/(3·2^{k−3}−1)} ≤ 27 · n^{((3·2^k−8)/(3·2^k−4)) · (3·2^{k−1}/(3·2^{k−1}−4))} = 27 · n^{3·2^k/(3·2^k−4)}.

Altogether the number of comparisons is at most max{36 · n, 54 · n^{3·2^k/(3·2^k−4)}} = 54 · n^{3·2^k/(3·2^k−4)}.
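Schematically, the recursion of Figure 2 looks as follows in Python (a sketch of ours; one_cover stands for the routine of Lemma 3.5, which we assume returns a set 1-greater than its input, and two_max_find is from the sketch after Figure 1):

    def k_max_find(comp, elems, k):
        """k-MaxFind (Figure 2, schematic). Assumes one_cover(comp, group)
        implements Lemma 3.5, returning a set 1-greater than group."""
        n = len(elems)
        if n <= 81 or k <= 2:
            return two_max_find(comp, elems)
        r = max(81, math.ceil(4 * n ** (8 / (3 * 2**k - 4))))  # group size from Figure 2
        survivors = []
        for start in range(0, n, r):               # equipartition into groups of size r
            group = elems[start:start + r]
            survivors.extend(one_cover(comp, group))   # T_i, 1-greater than S_i
        return k_max_find(comp, survivors, k - 1)      # error budget shrinks by 1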
Corollary 3.7 There is a max-finding algorithm using O(n) comparisons with error of at most
log log n.
3.2 Randomized Max-Finding
We now show that randomization can significantly reduce the number of comparisons required
to find an approximate maximum. Our algorithm operates correctly even if the adversary can
adaptively choose how to err when two elements are close (though we stress that the adversary
may not change input values over the course of an execution). In particular, the classic randomized
selection algorithm can take quadratic time in this adversarial model since for an input with all
equal values, the adversary can claim that the randomly chosen pivot is smaller than all other
elements. Nevertheless, even in this strong adversarial model, we show the following.
Algorithm SampledTournament: // For constant c and n sufficiently large, returns a 1-max-set with probability at least 1 − n^{−c}.
1. Initialize N0 ← {x1, . . . , xn}, W ← ∅, s ← 15(c + 2) + 1, C ← 4^4/(5e)^5, and i ← 0, where e is Euler's number.
2. if |Ni| ≤ ((c + 1)/C) · n^{1/3} ln n, add the elements of Ni to W and return W.
3. else add a random subset of ((c + 1)/C) · n^{1/3} ln n elements from Ni to W.
4. Randomly partition the elements in Ni into sets of size s. In each set, perform a
round-robin tournament.
5. Let Ni+1 contain all elements of Ni which had strictly fewer than (s − 2)/4 losses in
their round-robin tournament in Step 4. Increment i and goto Step 2.
Figure 3: The algorithm SampledTournament.
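A Python rendering of Figure 3 (a sketch of ours; the handling of a final group of size smaller than s is a simplification, and round_robin_beats is the helper from the 2-MaxFind sketch):

    import math, random

    def sampled_tournament(comp, elems, c=1):
        """SampledTournament (Figure 3): returns a set W that is a 1-max-set
        with probability at least 1 - n^(-c) for sufficiently large n."""
        n = len(elems)
        s = 15 * (c + 2) + 1
        C = 4**4 / (5 * math.e)**5
        sample_size = math.ceil(((c + 1) / C) * n ** (1 / 3) * math.log(n))
        N, W = list(elems), []
        while len(N) > sample_size:
            W.extend(random.sample(N, sample_size))     # Step 3: sample into W
            random.shuffle(N)                           # Step 4: random groups of size s
            survivors = []
            for start in range(0, len(N), s):
                group = N[start:start + s]
                beats = round_robin_beats(comp, group)
                # Step 5: advance elements with strictly fewer than (s-2)/4 losses.
                survivors.extend(e for e in group
                                 if len(group) - 1 - len(beats[e]) < (s - 2) / 4)
            N = survivors
        W.extend(N)                                     # Step 2: remainder goes into W
        return W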
Theorem 3.8 For any integer c ≥ 1, there exists a randomized algorithm which given any n elements finds a 3-max of the set with probability at least 1 − n^{−c} using at most (s − 1)n + ((c + 1)/(C ln 2))^2/2 · n^{2/3} ln^4 n = O(n) comparisons, where s, C are as defined in Figure 3.
Taking c > 1, and using the fact that the error of our algorithm can never be more than n − 1, this
gives an algorithm which finds an element with expected value at least x∗ − 4. The high-level idea
of the algorithm is as follows. We randomly equipartition the elements into constant-sized sets. In
each set we play a round-robin tournament and advance everyone who won more than 3/4 of its
comparisons. As we will prove, the element with the median number of wins can win at most 3/4
of its comparisons and hence no more than half of the elements advance. We also randomly sample
a set of elements at each level of the tournament tree. We show that either (1) at some round
of the tournament there is an abundance of elements with value at least x∗ − 1, in which case at
least one such element is sampled with high probability, or (2) x∗ advances as one of the top few
tournament elements with high probability. Figure 3 presents the subroutine SampledTournament
for the algorithm.
We now proceed to the analysis of our algorithm. First we show that the element with the
median number of wins (or the element of order ⌈n/2⌉ when sorted in increasing order by number of wins) must incur a significant number of losses. We in fact show that it must also incur a
significant number of wins, but we will not need this latter fact until presenting our selection and
sorting algorithms.
Lemma 3.9 In a round-robin tournament on n elements, the element with the median number of wins has at least m wins and at least m losses for m = ⌈(⌈n/2⌉ − 1)/2⌉ ≥ ⌈n/5⌉.
Proof. Sort the elements by their total number of wins and let x0 be the median according to the
number of wins. Let ℓ denote the number of wins of x0. Assume that n is even. Then the total number of wins for all the elements is at most n/2 · ℓ + n/2 · (n − 1) − (n/2)(n/2 − 1)/2, since there are n/2 elements with at most ℓ wins and the total number of wins by the n/2 elements that have more wins than x0 is at most n/2 · (n − 1) − (n/2)(n/2 − 1)/2. But the total number of wins is exactly n(n − 1)/2, and therefore we obtain that ℓ · n/2 ≥ (n/2)(n/2 − 1)/2, or ℓ ≥ (n − 2)/4. The bound on the number of losses is obtained in the same way. If n is odd then this argument gives a bound of (⌈n/2⌉ − 1)/2.
Lemma 3.10 Let s = 15(c + 2) + 1, C be as in Figure 3, and let W be the final set output in
Figure 3. Then for any integer choice of c ≥ 1:
• |W| ≤ ((c + 1)/(C ln 2)) · n^{1/3} ln^2 n.
• W is a 1-max-set with probability at least 1 − n−c .
• The algorithm SampledTournament makes at most (s − 1)n comparisons.
Proof. By Lemma 3.9 the element with the median number of wins has at least (s − 2)/4 losses
during each round-robin tournament in Step 4, and thus the fraction of elements that survive from
one iteration to the next in Step 5 is at most 1/2. Therefore, the number of iterations is at most
⌈log n⌉. In each iteration we sample ((c + 1)/C) · n^{1/3} ln n elements, and thus the size of the output W is as claimed. Also, the total number of comparisons is at most Σ_{i=0}^{∞} (s(s − 1)/2) · (n/2^i)/s = (s − 1)n, where n/2^i is an upper bound on |Ni|.
We now show that W is a 1-max-set with probability at least 1 − n^{−c}. We say that an iteration i is good if either x∗ advances to Ni+1, or W contains a 1-max at the end of round i. We then show that for any iteration i, conditioned on the event that iterations 1, . . . , i − 1 were good, we have that i is good with probability 1 − 1/n^{c+1}. The lemma would then follow by a union bound over all iterations i.
Now consider an iteration i where 1, . . . , i − 1 were good. Then either W already contains a
1-max, in which case i is good, or W does not contain a 1-max but x∗ ∈ Ni . Let us focus on this
latter case. Define ni = |Ni |. Let Qi be the event that the number of elements in Ni with value at
least x∗ − 1 is at least Cαni for α = n^{−1/3}. We show a dichotomy: either Qi holds, in which case a 1-max is sampled into W with probability 1 − 1/n^{c+1}, or Qi does not hold, in which case x∗ ∈ Ni+1 with probability 1 − 1/n^{c+1}.
To show the dichotomy, let us first assume Qi holds. Then the probability that we do not sample a 1-max into W is at most

    (1 − Cαni/ni)^{((c+1)/C) · n^{1/3} ln n} ≤ (1 − C/n^{1/3})^{((c+1)/C) · n^{1/3} ln n} ≤ e^{−(c+1) ln n} = 1/n^{c+1}.
Now let us assume that Qi does not hold. Then for x∗ to not advance to Ni+1 we must have that at least s/5 1-maxes were placed into the same set as x∗ in iteration i. Using that (a/b)^b ≤ (a choose b) ≤ (ea/b)^b, this happens with probability at most

    (Cαni choose s/5) · (ni choose 4s/5) / (ni choose s − 1)
        < ni · (Cαni choose s/5) · (ni choose 4s/5) / (ni choose s)
        ≤ ni · (e^s/(ni/s)^s) · (5Cαni/s)^{s/5} · (5ni/(4s))^{4s/5}
        ≤ ni · (e · C^{1/5} · 5^{1/5} · (5/4)^{4/5} · n^{−1/15})^s
        ≤ n^{−s/15+1},

which is at most n^{−(c+1)} by our choice of s and C.
Proof (of Theorem 3.8). Run the algorithm in Figure 3. By Lemma 3.10, the output W is a 1-max-set with probability at least 1 − n^{−c}. Conditioned on this event, a 2-max of W is thus a 3-max of the entire original input. A 2-max of W can be found via a round-robin tournament by Theorem 3.1 using (|W| choose 2) < |W|^2/2 comparisons. The total number of comparisons is thus the sum of comparisons made in Figure 3 and in the final round-robin tournament, which gives the bound claimed in the theorem statement.
4 Sorting and Selection
We now consider the problems of sorting and selection. We first present an algorithm 2-Sort which sorts with error 2 using O(n^{3/2}) comparisons (and, in particular, can be used for selection with error 2). We then describe the selection and sorting algorithms for general error k. We start by formally defining what is meant by selecting an element of a certain order with error.
Definition 4.1 Element xj in the set X = {x1 , . . . , xn } is of k-order i if there exists a partition
S1 , S2 of X \ {xj } such that |S1 | = i − 1, and S1 ∪ {xj } ≤k S2 ∪ {xj }. A k-median is an element
of k-order ⌈n/2⌉.
Our error 2 sorting algorithm is based on modifying 2-MaxFind so that the x found in Step
2(a) of Figure 1 is used as a pivot. We then compare this x against all elements and pivot into
two sets, recursively sort each, then concatenate. More formally, the algorithm 2-Sort works as
follows. If n ≤ 64 then we just perform a round-robin tournament on the elements. Otherwise let s = s(n) = √(2n). We choose some s elements and perform a round-robin tournament on them.
Now let x be an element with the median number of wins. We compare x to all elements and let
S1 be the set of elements that lost to x and S2 be the set of all elements that defeated x. Then
we recursively sort S1 and S2, then output sorted S1, then x, and then sorted S2. Note that any (y, y′) ∈ (S1 ∪ {x}) × S2 satisfies y′ ≥2 y since y′ ≥1 x and x ≥1 y. Correctness follows by induction from Theorem 3.1. We prove the following bound on the number of comparisons used by 2-Sort.
Theorem 4.2 There is a deterministic sorting algorithm 2-Sort with error 2 that requires at most 4 · n^{3/2} comparisons.
Proof. Let g(n) be the worst-case number of comparisons required by 2-Sort as described above. We claim g(n) ≤ 4 · n^{3/2}. For n ≤ 64 we perform a round-robin tournament and thus g(n) = n(n − 1)/2 in this case, which is less than 4 · n^{3/2} for n ≤ 64. Now we consider n > 64. Let t = |S2|. The number of comparisons used in a recursive call is at most s(s − 1)/2 + n − s + g(n − t − 1) + g(t). By our inductive assumption this is at most s(s − 1)/2 + n − s + 4((n − t − 1)^{3/2} + t^{3/2}). Without loss of generality we can assume that t ≤ (n − 1)/2 and then obtain that this function is maximized when t is the smallest possible (via a straightforward analysis of the derivative). By Lemma 3.9, x has at least (s − 2)/4 wins and (s − 2)/4 losses and therefore t ≥ (s − 2)/4. Now we observe that (n − t − 1)^{3/2} ≤ n^{3/2} − (3/2)√n · (t + 1) (to verify, it is sufficient to square both sides and use the fact that t ≤ n). Hence

    g(n) ≤ s^2/2 + n − 3s/2 + 4(n^{3/2} − (3/2)√n · (t + 1) + t^{3/2})
         ≤ 4 · n^{3/2} + 2n − (3/2)√(2n) − 3√n(√(2n) + 2)/2 + n^{3/4}
         = 4 · n^{3/2} − (3/√2 − 2)n − (3 + 3/√2)√n + n^{3/4} < 4 · n^{3/2}.

The last line of the equation follows from the following application of the inequality of arithmetic and geometric means:

    (3/√2 − 2)n + (3 + 3/√2)√n ≥ 2 · √((3/√2 − 2)(3 + 3/√2)) · n^{3/4} > n^{3/4}.

This finishes the proof of the induction step.
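The algorithm is compact in code (a sketch of ours, again using round_robin_beats from the 2-MaxFind sketch and reusing tournament outcomes when the pivot is compared against sample members):

    def two_sort(comp, elems):
        """2-Sort: round-robin on ~sqrt(2n) elements, pivot on the median-wins
        element, split the rest by the pivot, recurse on both sides (error 2)."""
        n = len(elems)
        if n <= 64:
            beats = round_robin_beats(comp, elems)
            return sorted(elems, key=lambda e: len(beats[e]))  # sort by wins
        s = max(2, round(math.sqrt(2 * n)))
        sample = elems[:s]
        beats = round_robin_beats(comp, sample)
        order = sorted(sample, key=lambda e: len(beats[e]))
        x = order[len(order) // 2]           # element with the median number of wins
        S1, S2 = [], []                      # losers to x / winners against x
        for e in elems:
            if e == x:
                continue
            if e in beats:                   # outcome against x known from the sample
                lost = e in beats[x]
            else:
                lost = not comp.geq(e, x)
            (S1 if lost else S2).append(e)
        return two_sort(comp, S1) + [x] + two_sort(comp, S2)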
At a high level our algorithm for k-order selection is similar to the classical selection algorithm
of Blum et al. [5], in that in each step we try to find a pivot that allows us to recurse on a problem
of geometrically decreasing size. In our scenario though, a good pivot must have an additional
property which we now define.
Definition 4.3 Element xj in the set X = {x1 , . . . , xn } is a k-pivot for m elements if there exist
disjoint sets Sw ⊂ X \ {xj } (winning set) and Sl ⊂ X \ {xj } (losing set), such that |Sw | = |Sl | = m,
xℓ ≤k xj for all xℓ ∈ Sl, and xℓ ≥k xj for all xℓ ∈ Sw.
In order to use an element as a pivot in our algorithm it must be a (k − 1)-pivot. We construct
such a pivot via a recursive algorithm that, given n elements and a number k, constructs a k-pivot for at least n/(5 · 2^{k−1}) elements. For k = 1, Lemma 3.9 gives the desired algorithm. The general
algorithm is effectively a recursive application of Lemma 3.9 and is described below. Our algorithm
for selecting an element of k-order i is in Figure 5.
We claim that algorithm k-Select finds an element of k-order i using at most O(2^k · n^{1+1/2^{k−1}}) comparisons. To prove this, we first analyze the algorithm k-Pivot.
Lemma 4.4 For any 1 ≤ k ≤ log log n, given n elements the deterministic algorithm k-Pivot (see Figure 4) finds a k-pivot for m ≥ n/(5 · 2^{k−1}) elements and corresponding losing set Sl and winning set Sw using at most 9 · n^{1+1/(2^k−1)} + cn comparisons, where cn = min{(n choose 2), (215 choose 2)}.
Algorithm k-Pivot: // Given a set of n ≥ 3 elements X, returns a k-pivot for m ≥ n/(5 · 2^{k−1}) elements and corresponding losing and winning sets (as described in Definition 4.3).
1. if k = 1 or n ≤ 215 // Base case
(a) Perform a round-robin tournament on X.
(b) Let y be the element with the median number of wins.
(c) Let m be the smaller of the number of wins and the number of losses of y.
(d) Let Sl be a set of m elements that lost to y and Sw be a set of m elements that defeated y.
2. else
(a) Set s ← ⌈3 · n^{1/(2^k−1)}⌉ and set s′ ← ⌈(⌈s/2⌉ − 1)/2⌉.
(b) Initialize T ← X, i ← 0.
(c) while |T| ≥ s
i. i ← i + 1.
ii. Let Xi be a set of s arbitrary elements from T.
iii. Perform a round-robin tournament on Xi.
iv. Set yi to be the element with the median number of wins.
v. Set Si,l to be a set of any s′ elements that lost to yi and Si,w a set of s′ elements that defeated yi.
vi. Update T ← T \ (Si,l ∪ Si,w ∪ {yi}).
(d) Set t ← i and Y ← {y1, y2, . . . , yt}.
(e) Recursively call (k − 1)-Pivot on Y and let y, Sl′ and Sw′ be the (k − 1)-pivot and the sets returned.
(f) Set Sl ← Sl′ ∪ ∪_{yi∈Sl′} Si,l and Sw ← Sw′ ∪ ∪_{yi∈Sw′} Si,w.
3. Return y, Sl and Sw.
Figure 4: The algorithm k-Pivot for finding a k-pivot for at least n/(5 · 2^{k−1}) elements.
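In Python, Figure 4 can be sketched as follows (our rendering; round_robin_beats is the helper from the 2-MaxFind sketch, and slicing the losing and winning sets to size s′ is justified by Lemma 3.9):

    def k_pivot(comp, elems, k):
        """k-Pivot (Figure 4, schematic): returns (y, S_l, S_w) where y is a
        k-pivot for m >= n/(5 * 2^(k-1)) elements."""
        n = len(elems)
        if k == 1 or n <= 215:                         # base case
            beats = round_robin_beats(comp, elems)
            order = sorted(elems, key=lambda e: len(beats[e]))
            y = order[len(order) // 2]                 # median number of wins
            lost_to_y = [e for e in elems if e in beats[y]]
            beat_y = [e for e in elems if y in beats[e]]
            m = min(len(lost_to_y), len(beat_y))
            return y, lost_to_y[:m], beat_y[:m]
        s = math.ceil(3 * n ** (1 / (2**k - 1)))
        s_p = math.ceil((math.ceil(s / 2) - 1) / 2)    # s' from Figure 4
        T, ys, sets = list(elems), [], {}
        while len(T) >= s:
            group = T[:s]
            beats = round_robin_beats(comp, group)
            order = sorted(group, key=lambda e: len(beats[e]))
            yi = order[len(order) // 2]
            lost = [e for e in group if e in beats[yi]][:s_p]   # s' losers to yi
            won = [e for e in group if yi in beats[e]][:s_p]    # s' winners over yi
            ys.append(yi)
            sets[yi] = (lost, won)
            removed = set(lost) | set(won) | {yi}
            T = [e for e in T if e not in removed]
        y, Sl_p, Sw_p = k_pivot(comp, ys, k - 1)       # recurse on the medians
        S_l = list(Sl_p) + [e for yi in Sl_p for e in sets[yi][0]]
        S_w = list(Sw_p) + [e for yi in Sw_p for e in sets[yi][1]]
        return y, S_l, S_w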
Algorithm k-Select: // Given a set X of n ≥ 3 elements, returns an element of
k-order i in X
1. if k ≤ 2 or n ≤ 8, sort elements using 2-Sort then return the element with index i.
2. else
(a) Set s ← ⌊n^{1−2^{1−k}}⌋ and let T be a set of any s elements from X.
(b) Call (k − 1)-Pivot on T and let y, Sl and Sw be the pivot and the sets returned.
(c) Compare y with each of the other n − 1 elements.
(d) if y defeats at least (n − 1)/2 elements
i. Set X1 to be the set of all elements that y defeats and are not in Sw .
ii. Set X2 = X \ (X1 ∪ {y}).
(e) else // the symmetric case
i. Set X2 to be the set of all elements that y lost to and are not in Sl .
ii. Set X1 = X \ (X2 ∪ {y}).
(f) if |X1 | = i − 1 return y.
(g) else if i ≤ |X1 |, return the output of k-Select for an element of k-order i in X1 .
(h) else return the output of k-Select for an element of k-order (i − |X1 | − 1) in X2 .
Figure 5: The algorithm k-Select.
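A Python rendering of Figure 5 (a sketch of ours; it reuses k_pivot from the sketch after Figure 4 and two_sort from the beginning of this section, and for simplicity it may re-compare y against members of the pivot sample):

    def k_select(comp, elems, i, k):
        """k-Select (Figure 5, schematic): returns an element of k-order i
        (1-based) among elems."""
        n = len(elems)
        if k <= 2 or n <= 8:
            return two_sort(comp, elems)[i - 1]
        s = math.floor(n ** (1 - 2 ** (1 - k)))
        y, S_l, S_w = k_pivot(comp, elems[:s], k - 1)
        S_l, S_w = set(S_l), set(S_w)
        defeated, defeated_by = [], []
        for e in elems:                       # compare the pivot with everyone else
            if e == y:
                continue
            (defeated if comp.geq(y, e) else defeated_by).append(e)
        if len(defeated) >= (n - 1) / 2:
            X1 = [e for e in defeated if e not in S_w]
            X1_set = set(X1)
            X2 = [e for e in elems if e != y and e not in X1_set]
        else:                                 # the symmetric case
            X2 = [e for e in defeated_by if e not in S_l]
            X2_set = set(X2)
            X1 = [e for e in elems if e != y and e not in X2_set]
        if len(X1) == i - 1:
            return y
        if i <= len(X1):
            return k_select(comp, X1, i, k)
        return k_select(comp, X2, i - len(X1) - 1, k)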
Proof. We prove by induction that for any 1 ≤ k ≤ log log n, k-Pivot uses at most 9 · n^{1+1/(2^k−1)} + cn comparisons and finds a k-pivot for m ≥ n/(5 · 2^{k−1}) elements. First, if k = 1 or n ≤ 215 then, by Lemma 3.9, the claim holds in the base case since m ≥ n/5 and the total number of comparisons is n(n − 1)/2 ≤ 9 · n^{1+1/(2^k−1)} + cn.
We now prove the bound on the error in the general case when k ≥ 2 and n ≥ 216. Let y be
the element returned by k-Pivot. By the inductive hypothesis, for every v ∈ Sl, if v ∈ Sl′ then v ≤k−1 y. Otherwise, when v ∈ Si,l for some yi ∈ Sl′ we get that v ≤1 yi and yi ≤k−1 y. This implies that in both cases v ≤k y. Similarly, for every v ∈ Sw, v ≥k y.
Next we prove the bound on m. The algorithm performs a round-robin on s elements until at most s − 1 elements are left and each such round eliminates exactly 2s′ + 1 elements. Therefore the number of rounds t is at least ⌈(n − s + 1)/(2s′ + 1)⌉. By the inductive hypothesis,

    m′ = |Sl′| = |Sw′| ≥ t/(5 · 2^{k−2}) ≥ (n − s + 1)/(5 · 2^{k−2} · (2s′ + 1)).

Now,

    m = m′ · (s′ + 1) ≥ ((n − s + 1)/(2s′ + 1)) · ((s′ + 1)/(5 · 2^{k−2})) = n/(5 · 2^{k−1}) + (n − 2(s − 1)(s′ + 1))/(5 · 2^{k−1} · (2s′ + 1)).

If s ≤ 19 then n ≥ 216 implies that n − 2(s − 1)(s′ + 1) ≥ 0. Otherwise (for s ≥ 20), k ≥ 2 implies that s ≤ 3 · n^{1/3} + 1 and therefore n ≥ ((s − 1)/3)^3. By definition of s′, s′ = ⌈(⌈s/2⌉ − 1)/2⌉ ≤ (s + 1)/4. Therefore, for s ≥ 20,

    n − 2(s − 1)(s′ + 1) ≥ ((s − 1)/3)^3 − (s^2 + 4s + 5)/2 > 0.

Hence m = |Sl| = |Sw| ≥ n/(5 · 2^{k−1}).
Finally, we prove the bound on the total number of comparisons used by k-Pivot when k ≥ 2 and n ≥ 216 as follows:
• t ≤ ⌊n/(2s′ + 1)⌋ invocations of round-robin on s elements. By definition 2s′ + 1 ≥ s/2 and therefore this step requires at most ⌊2n/s⌋ · s(s − 1)/2 ≤ n(s − 1) ≤ 3 · n^{1+1/(2^k−1)} comparisons.
• The invocation of (k − 1)-Pivot on t ≤ ⌊2n/s⌋ elements. By our inductive hypothesis, this requires 9 · t^{1+1/(2^{k−1}−1)} + ct ≤ 9 · (2 · n^{1−1/(2^k−1)}/3)^{1+1/(2^{k−1}−1)} + cn ≤ 6 · n^{1+1/(2^k−1)} + cn comparisons.
Altogether the number of comparisons is at most 9 · n^{1+1/(2^k−1)} + cn, as claimed.
We are now ready to state and prove our bounds for k-Select formally.
Theorem 4.5 For any 2 ≤ k ≤ log log n, there is a deterministic algorithm k-Select which, given n elements and i ∈ [n], finds an element of k-order i using at most 25 · 2^{k−1} · n^{1+2^{1−k}} + 5 · 2^{2k−3} · n^{2^{1−k}} · cn comparisons, where cn = min{(n choose 2), (215 choose 2)} (as defined in Lemma 4.4).
Proof. We prove the claim by induction on n. For n ≤ 8 we use 2-Sort, which sorts with error 2. The element with index i in such a sorted order has 2-order i. Also, according to Theorem 4.2, the algorithm uses at most 4n^{3/2} ≤ 25 · 2^{k−1} · n^{1+2^{1−k}} comparisons.
We now consider the general case when n ≥ 9 and k ≥ 3. If y defeats at least (n − 1)/2 elements
then for every element z1 ∈ X1, z1 ≤1 y and, by the properties of the (k − 1)-pivot, for every element z2 in X2, y ≤k−1 z2. In particular, z1 ≤k z2. Now, if |X1| = i − 1 then X1 and X2 form a partition of X \ {y} showing that y is an element of k-order i. If |X1| > i − 1 then let y′ be the k-order i element in X1 returned by the recursive call to k-Select. There exists a partition of X1 into sets S1′ and S2′ showing that y′ is an element of k-order i in X1. We set S1 = S1′ and S2 = S2′ ∪ {y} ∪ X2. First, by the definition of y′, for every z1 ∈ S1 ∪ {y′} and z2′ ∈ S2′ ∪ {y′} we have z1 ≤k z2′. Now, by our choice of y we know that for z1 ∈ S1 ∪ {y′} and z2 ∈ X2 ∪ {y} we have z1 ≤k z2. Hence S1 ∪ {y′} ≤k S2 ∪ {y′}, that is, y′ is an element of k-order i. The case when |X1| < i − 1 and the symmetric case when y lost to at least (n − 1)/2 elements are analogous. This proves that k-Select returns an element of k-order i.
Finally, in the general case, the number of comparisons k-Select uses is as follows.
• The call to (k − 1)-Pivot on s = ⌊n^{1−2^{1−k}}⌋ ≥ ⌊√n⌋ ≥ 3 elements. Lemma 4.4 implies that this step uses at most 9 · s^{1+1/(2^{k−1}−1)} + cs ≤ 9n + cn comparisons.
• Comparison of the pivot with all the elements uses at most n − 1 comparisons.
• The invocation of k-Select on one of X1 and X2. By the definition of pivot, |Sw| = |Sl| ≤ (s − 1)/2 ≤ (n − 2)/4. Hence we observe that if y defeated at least (n − 1)/2 elements then |X1| ≥ (n − 1)/2 − |Sw| ≥ (n − 2)/4 ≥ |Sl|. And by the definition of X2, |X2| ≥ |Sw|. Similarly, if y lost to at least (n − 1)/2 elements then |X1| ≥ |Sl| and |X2| ≥ |Sw|. Assume, without loss of generality, that |X1| ≥ |X2| and let α = (|X2| + 1)/n. By the properties of the (k − 1)-pivot,

    |X2| ≥ s/(5 · 2^{k−2}) ≥ ⌊n^{1−2^{1−k}}⌋/(5 · 2^{k−2}) ≥ n^{1−2^{1−k}}/(5 · 2^{k−2}) − 1,

or α ≥ n^{−2^{1−k}}/(5 · 2^{k−2}). The number of comparisons is maximized when k-Select is executed on X1, which has size n − |X2| − 1 = n − αn and, by our inductive hypothesis, this step can be done using N comparisons for

    N ≤ 25 · 2^{k−1} · (n − αn)^{1+2^{1−k}} + 5 · 2^{2k−3} · (n − αn)^{2^{1−k}} · cn
      ≤ 25 · 2^{k−1} · n^{1+2^{1−k}} · (1 − α) + 5 · 2^{2k−3} · n^{2^{1−k}} · (1 − α)^{2^{1−k}} · cn
      ≤ 25 · 2^{k−1} · n^{1+2^{1−k}} · (1 − α) + 5 · 2^{2k−3} · n^{2^{1−k}} · (1 − 2^{1−k} · α) · cn
      ≤ 25 · 2^{k−1} · n^{1+2^{1−k}} + 5 · 2^{2k−3} · n^{2^{1−k}} · cn − 10 · n − cn.

Therefore altogether the number of comparisons is at most 25 · 2^{k−1} · n^{1+2^{1−k}} + 5 · 2^{2k−3} · n^{2^{1−k}} · cn.
We now show that with a small change our selection algorithm can be used to produce a complete sorting with error k. Namely, instead of recursing on only one of the subsets X1 and X2, we recursively sort both parts and then concatenate the sorted results in the order k-Sort(X1), y, k-Sort(X2). As before, in the base case (when n ≤ 8) we just output the result of 2-Sort. We claim that the resulting algorithm, which we refer to as k-Sort, has the following bounds on the number of comparisons.
Theorem 4.6 For any 2 ≤ k ≤ log log n, there is a deterministic algorithm k-Sort which, given a set X of n elements, sorts the elements of X with error k and uses at most 7 · 2^{2k} · n^{1+2^{1−k}} + n · cn comparisons, where cn = min{(n choose 2), (215 choose 2)} (as defined in Lemma 4.4).
Proof. As before, we prove the claim by induction on n. For n ≤ 8 we use 2-Sort which produces
sorting with error 2 and gives a suitable bound on the number of comparisons.
We now consider the general case (n ≥ 9 and k ≥ 3). As in the case of selection, it is easy to
see that the algorithm sorts X with error k. However the bound on the number of comparisons
is different from the one we gave for k-Select since now the algorithm is called recursively on both X1 and X2. As before, the call to (k − 1)-Pivot on s = ⌊n^{1−2^{1−k}}⌋ elements uses at most 9 · s^{1+1/(2^{k−1}−1)} + cn ≤ 9n + cn comparisons and there are at most n − 1 comparisons of the pivot with all the other elements. We again assume, without loss of generality, that |X1| ≥ |X2| and denote α = (|X2| + 1)/n. By our inductive hypothesis, the number of comparisons N used for the recursive calls is bounded by

    N ≤ 7 · 2^{2k} · ((n − αn)^{1+2^{1−k}} + (αn)^{1+2^{1−k}}) + cn · ((n − αn) + (αn − 1))
      = 7 · 2^{2k} · n^{1+2^{1−k}} · ((1 − α)^{1+2^{1−k}} + α^{1+2^{1−k}}) + (n − 1) · cn.        (1)

By differentiating the expression (1 − α)^{1+2^{1−k}} + α^{1+2^{1−k}} as a function of α we obtain that it is monotonically decreasing in the interval [0, 1/2], and hence its maximum over the feasible range is attained when α is the smallest. Therefore we can use the lower bound α ≥ n^{−2^{1−k}}/(5 · 2^{k−2}) to conclude that

    (1 − α)^{1+2^{1−k}} + α^{1+2^{1−k}} ≤ (1 − α) · (1 − α)^{2^{1−k}} + α ≤ (1 − α) · (1 − 2^{1−k} · α) + α
        = 1 − (1 − α) · 2^{1−k} · α ≤ 1 − (9/10) · 2^{1−k} · n^{−2^{1−k}}/(5 · 2^{k−2}) = 1 − (9/25) · 2^{2−2k} · n^{−2^{1−k}}.

By substituting this bound into equation (1) we obtain that

    N ≤ 7 · 2^{2k} · n^{1+2^{1−k}} · (1 − (9/25) · 2^{2−2k} · n^{−2^{1−k}}) + (n − 1) · cn ≤ 7 · 2^{2k} · n^{1+2^{1−k}} + n · cn − 10n − cn.

Therefore altogether the number of comparisons used by k-Sort is at most 7 · 2^{2k} · n^{1+2^{1−k}} + n · cn.
An immediate corollary of Theorem 4.6 is that it is possible to achieve error of no more than
log log n in close to optimal time.
Corollary 4.7 There is a sorting algorithm using O(n log^2 n) comparisons with error of at most log log n.
5 Lower Bounds
Here we prove lower bounds against deterministic max-finding, sorting, and selection algorithms.
In particular, we show that Theorem 3.6, Theorem 4.5 and Theorem 4.6 achieve an almost optimal trade-off between the error and the number of comparisons.
Our proof is based on the analysis of the comparison graph, or the directed graph on all elements
in which an edge (xi , xj ) is present whenever a comparison between xi and xj was made and its
imprecise outcome was “xi ≥ xj ”. We show that one can only conclude that xi ≥k xj if this
graph has a path of length at most k from xi to xj . The existence of short paths from an element
to numerous other elements (such as when the element is a k-max) is only possible when there
are many vertices with large out-degree. Following this intuition we define an oracle that when
comparing two elements always responds that the one with the smaller out-degree is larger than
the one with the larger out-degree. Such an oracle will ensure that a large number of comparisons
needs to be made in order to obtain a sufficient number of vertices with high out-degree. We also
show that the responses of the oracle can be seen as derived from actual values defined using the
resulting comparison graph.
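The adversarial comparator is easy to state in code (a sketch of ours; the tie-breaking direction is arbitrary, as in the proof):

    def adversary_comparator(n):
        """Build the lower-bound oracle: the element with the currently smaller
        out-degree always wins, and the comparison digraph is recorded.

        Afterwards, values consistent with all the answers can be assigned by
        setting val(x_i) to the shortest-path distance from a fixed x to x_i in
        the recorded digraph (as in the proof of Lemma 5.1 below)."""
        out_deg = [0] * n
        edges = set()
        def geq(i, j):
            # Answer "x_i >= x_j" iff i currently has the smaller out-degree.
            if out_deg[i] < out_deg[j]:
                winner, loser = i, j
            else:
                winner, loser = j, i      # ties broken arbitrarily, here toward j
            out_deg[winner] += 1
            edges.add((winner, loser))
            return winner == i
        return geq, out_deg, edges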
Lemma 5.1 Suppose that a deterministic algorithm A, given n elements, guarantees that after m comparisons it can list r elements, each of which is guaranteed to be k-greater than at least q elements. Then m = Ω(max{q^{1+1/(2^k−1)}, q · r^{1/2^{k−1}}}).
Proof. To create a worst case input we first define a strategy for the comparator and later choose
values for the elements which are consistent with the given answers, while maximizing the error of
the algorithm.
Let Gt be the comparison graph at time t. That is, Gt is a digraph whose vertices are the xi and
which contains the directed edge (xi , xj ) if and only if before time t a comparison between xi and
xj has been made, and the comparator has responded with “xi ≥ xj ”. We denote the out-degree
of xi in Gt by dt (xi ). Assume that at time t the algorithm wants to compare some xi and xj . If
dt (xi ) ≥ dt (xj ) then the comparator responds with “xj ≥ xi ”, and it responds with “xi ≥ xj ”
otherwise. (The response is arbitrary when dt (xi ) = dt (xj ).) Let x be an element that is declared
by A to be k-greater than at least q elements.
Let ℓi = dist(x, xi), where dist gives the length of the shortest (directed) path in the final graph Gm. If no such path exists, we set ℓi = n. After the algorithm is done, we define val(xi) = ℓi. We
first claim that the values are consistent with the responses of the comparator. If for some pair of
elements xi , xj the comparator has responded with “xi ≥ xj ”, then Gm contains edge (xi , xj ). This
implies that for any x, dist(x, xj) ≤ dist(x, xi) + 1, or ℓi ≥ ℓj − 1. Therefore the answer “xi ≥ xj” is consistent with the given values.
Consider the nodes xi that x can reach via a path of length at most k. These are exactly the
elements k-smaller than x, and thus there must be at least q of them. For i ≤ k let Si = {xj | ℓj = i} and si = |Si|. We claim that for every i ∈ [k], m ≥ si^2/(2s_{i−1}) − si/2. For a node u ∈ Si, let pred(u)
denote some node in Si−1 such that the edge (pred(u), u) is in the graph. For a node v ∈ Si−1 , let
Si,v = {u ∈ Si | v = pred(u)}. Note that each node u ∈ Si has a single designated pred(u) ∈ Si−1
and hence Si,v ’s are disjoint. Further, let dout (pred(u), u) be the out-degree of pred(u) when the
comparison between pred(u) and u was made (as a result of which the edge was added to Gm). Note that for any distinct nodes u, u′ ∈ Si,v, dout(v, u) ≠ dout(v, u′) since the out-degree of v grows each time an edge to a node in Si,v is added. This implies that

    Σ_{u∈Si,v} dout(v, u) ≥ Σ_{d ≤ |Si,v|−1} d = |Si,v|(|Si,v| − 1)/2.

By the definition of our comparator, for every u ∈ Si, dm(u) ≥ dout(pred(u), u). This implies that

    m ≥ Σ_{v∈S_{i−1}} Σ_{u∈Si,v} dm(u) ≥ Σ_{v∈S_{i−1}} |Si,v|(|Si,v| − 1)/2 = ((Σ_{v∈S_{i−1}} |Si,v|^2) − |Si|)/2.

Using the inequality between the quadratic and arithmetic means,

    Σ_{v∈S_{i−1}} |Si,v|^2 ≥ (Σ_{v∈S_{i−1}} |Si,v|)^2 / |S_{i−1}| = si^2/s_{i−1}.

This implies that m ≥ si^2/(2s_{i−1}) − si/2.
We can therefore conclude that si ≤ √((2m + si) · s_{i−1}) ≤ √(3m · s_{i−1}), since si ≤ n ≤ m. By applying this inequality and using the fact that s0 = 1 we obtain that s1^2/3 ≤ m and si ≤ 3m · (3m/s1)^{−2^{−(i−1)}} for i > 1. Since Σ_{i≤k} si ≥ q + 1, we thus find that q ≤ 12 · m · (3m/s1)^{−2^{−(k−1)}}. This holds since either
1. (3m/s1)^{−2^{−(k−1)}} > 1/2, and then 12 · m · (3m/s1)^{−2^{−(k−1)}} ≥ 6m > n, or
2. (3m/s1)^{−2^{−(k−1)}} ≤ 1/2, and then

    (3m/s1)^{−2^{−i+1}}/(3m/s1)^{−2^{−i}} = (3m/s1)^{−2^{−i}} ≤ (3m/s1)^{−2^{−(k−1)}} ≤ 1/2

for i ≤ k − 1, where the penultimate inequality holds since s1 < 3m. In this case

    q − s1 ≤ Σ_{i=2}^{k} si ≤ Σ_{i=2}^{k} 3m · (3m/s1)^{−2^{−(i−1)}} ≤ Σ_{i≤k} 2^{i−k} · 3m · (3m/s1)^{−2^{−(k−1)}} < 2 · (3m)^{1−2^{−(k−1)}} · s1^{2^{−(k−1)}}.

If s1 ≥ q/2, then m = Ω(q^2) since m ≥ s1^2/3. Otherwise we have that

    m ≥ (q/4)^{1/(1−2^{−(k−1)})}/(3 · s1^{1/(2^{k−1}−1)}),

implying

    m = Ω(max{s1^2, q^{1/(1−2^{−(k−1)})}/s1^{1/(2^{k−1}−1)}}) = Ω(q^{1+1/(2^k−1)}),

where the final equality can be seen by making the two terms in the max equal.
Also, note that the choice of x amongst the r elements of the lemma statement was arbitrary, and that s1 is just the out-degree of x. Let smin be the minimum out-degree amongst the r elements. Then we trivially have m ≥ r · smin. Thus, if smin ≥ q/2 then m ≥ qr/2, and otherwise

    m = Ω(max{r · smin, q^{1/(1−2^{−(k−1)})}/smin^{1/(2^{k−1}−1)}}) = Ω(q · r^{1/2^{k−1}}),

where the final equality is again seen by making the two terms in the max equal.
From here we obtain a lower bound for max-finding by setting r = 1, q = n − 1, and for median-finding and sorting by setting r = q = n/2. The sorting lower bound holds for k-order selection of the i-th element for any i = c · n for constant 0 < c < 1. More generally, selecting an element of order i ≥ n/2 requires Ω(i · max{i^{1/(2^k−1)}, n^{1/2^{k−1}}}) comparisons. For i ≤ n/2 we obtain the lower bound of Ω((n − i) · max{(n − i)^{1/(2^k−1)}, n^{1/2^{k−1}}}) by considering the symmetric problem.
Theorem 5.2 Every deterministic max-finding algorithm A with error k requires Ω(n^{1+1/(2^k−1)}) comparisons.
Theorem 5.3 Every deterministic algorithm A which k-sorts n elements, or finds an element of k-order i for i = c · n with 0 < c < 1 a constant, requires Ω(n^{1+1/2^{k−1}}) comparisons.
Theorem 5.3 implies that k-Sort and k-Select are optimal up to a constant factor for any constant k.
In addition we obtain that Corollary 3.7 is essentially tight.
Corollary 5.4 Let A be a deterministic max-finding algorithm that makes O(n) comparisons. Then
A has error at least log log n − O(1).
6 Conclusions
We defined a simple and natural model of imprecision in the result of a comparison. The model is inspired by both imprecision in human judgment of values and also by bounded but potentially adversarial errors in sporting tournaments. Despite the basic nature of the model and the vast literature on sorting and searching with faulty comparisons, we are not aware of any prior efforts to address this type of error. Our results show that there exist algorithms that are robust to imprecision in comparisons while using substantially fewer comparisons than the naïve methods. For
deterministic algorithms our problem can equivalently be seen as finding a k-king in any tournament
graph (or sorting elements so that each element is a k-king for vertices of lower order) while
minimizing the number of edges checked. Our results generalize previous work on this problem
that considered the case of k = 2 [26].
We note that in most of the results substantially tighter constants can be obtained in the bounds using small modifications of the algorithms, more careful counting, and optimization for small values of k and n. This would yield algorithms that improve significantly on the naïve approach even for small values of n. We made only a modest effort to improve the constants, in order to keep the presentation of the main ideas clear.
While our lower bounds show that many of the algorithms we give are essentially optimal, a number of interesting and natural problems are left open.
1. What is the complexity of deterministic maximum finding with error 2? 2-MaxFind uses O(n^{3/2}) comparisons whereas our lower bound is Ω(n^{4/3}) comparisons. Resolving the case of k = 2 is likely to lead to closing the gap for larger error k.
2. Can error 2 be achieved by a randomized algorithm using O(n) comparisons? SampledTournament
only guarantees error 3.
3. For randomized algorithms it is also natural to consider the expected error of an algorithm.
What is the lowest expected error that can be achieved using a randomized algorithm for
the tasks considered in this paper? Note that in the example presented for the proof of
Theorem 3.2, choosing a random element would give a maximum element with expected error
of 1 and this is the best possible in this example.
4. We have not addressed the complexity of randomized sorting with error k.
References
[1] Gagan Aggarwal, Nir Ailon, Florin Constantin, Eyal Even-Dar, Jon Feldman, Gereon Frahling,
Monika R. Henzinger, S. Muthukrishnan, Noam Nisan, Martin Pál, Mark Sandler, and Anastasios Sidiropoulos. Theory research at Google. SIGACT News, 39(2):10–28, 2008.
[2] Noga Alon and Yossi Azar. Sorting, approximate sorting, and searching in rounds. SIAM J.
Discrete Math, 1(3):269–280, 1988.
[3] Shay Assaf and Eli Upfal. Fault tolerant sorting networks. SIAM J. Discrete Math, 4(4):472–
480, 1991.
[4] Michael Ben-Or and Avinatan Hassidim. The Bayesian learner is optimal for noisy binary search (and pretty good for quantum as well). In Proceedings of the 49th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 221–230, 2008.
[5] Manuel Blum, Robert W. Floyd, Vaughan R. Pratt, Ronald L. Rivest, and Robert E. Tarjan.
Time bounds for selection. J. Comput. Syst. Sci., 7(4):448–461, 1973.
[6] Béla Bollobás and Andrew Thomason. Parallel sorting. Discrete Appl. Math., 6:1–11, 1983.
[7] Ryan S. Borgstrom and S. Rao Kosaraju. Comparison-based search in the presence of errors.
In Proceedings of the 25th Annual ACM Symposium on Theory of Computing (STOC), pages
130–136, 1993.
[8] Ferdinando Cicalese. Fault-Tolerant Search Algorithms - Reliable Computation with Unreliable
Information. Monographs in Theoretical Computer Science. An EATCS Series. Springer, 2013.
[9] Herbert Aron David. The Method of Paired Comparisons. Charles Griffin & Company Limited,
2nd edition, 1988.
[10] Uriel Feige, Prabhakar Raghavan, David Peleg, and Eli Upfal. Computing with noisy information. SIAM J. Comput., 23(5), 1994.
[11] Vitaly Feldman. Robustness of evolvability. In Proceedings of COLT, pages 277–292, 2009.
[12] Irene Finocchi, Fabrizio Grandoni, and Giuseppe F. Italiano. Optimal resilient sorting and
searching in the presence of memory faults. Theor. Comput. Sci., 410:4457–4470, 2009.
[13] Irene Finocchi and Giuseppe F. Italiano. Sorting and searching in the presence of memory
faults (without redundancy). In Proceedings of the 36th Annual ACM Symposium on Theory
of Computing (STOC), pages 101–110, 2004.
[14] William I. Gasarch, Evan Golub, and Clyde P. Kruskal. Constant time parallel sorting: an
empirical view. J. Comput. Syst. Sci., 67(1):63–91, 2003.
[15] Roland Häggkvist and Pavol Hell. Parallel sorting with constant time for comparisons. SIAM
J. Comput., 10(3):465–472, 1981.
[16] Roland Häggkvist and Pavol Hell. Sorting and merging in rounds. SIAM Journal on Algebraic
and Discrete Methods, 3(4):465–473, 1982.
[17] Richard M. Karp and Robert Kleinberg. Noisy binary search and its applications. In SODA,
pages 881–890, 2007.
[18] Michael Kearns. Efficient noise-tolerant learning from statistical queries. Journal of the ACM,
45(6):983–1006, 1998.
[19] H.G. Landau. On dominance relations and the structure of animal societies: III. The condition
for a score structure. The Bulletin of Mathematical Biophysics, 15(2):143–148, 1953.
[20] László Lovász. An Algorithmic Theory of Numbers, Graphs, and Convexity. CBMS-NSF
Regional Conference Series in Applied Mathematics 50, SIAM, 1986.
[21] Stephen B. Maurer. The king chicken theorems. Mathematics Magazine, 53(2):67–80, March
1980.
[22] Andrzej Pelc. Searching games with errors—fifty years of coping with liars. Theor. Comput.
Sci., 270(1-2):71–109, 2002.
[23] Bala Ravikumar, K. Ganesan, and K. B. Lakshmanan. On selecting the largest element in
spite of erroneous information. In Proceedings of the 4th Annual Symposium on Theoretical
Aspects of Computer Science (STACS), pages 88–99, 1987.
[24] Alfréd Rényi. On a problem in information theory. Magyar Tud. Akad. Mat. Kutató Int. Közl,
6:505–516, 1962.
[25] Ronald L. Rivest, Albert R. Meyer, Daniel J. Kleitman, Karl Winklmann, and Joel Spencer.
Coping with errors in binary search procedures. J. Comput. Sys. Sci., 20(3):396–405, 1980.
[26] Jian Shen, Li Sheng, and Jie Wu. Searching for sorted sequences of kings in tournaments.
SIAM J. Comput., 32(5):1201–1209, 2003.
[27] Scott M. Smith and Gerald S. Albaum. Fundamentals of Marketing Research. Sage Publications, Inc., first edition, 2005.
[28] Louis Leon Thurstone. A law of comparative judgment. Psychological Review, 34:273–286,
1927.
[29] Stanislaw Marcin Ulam. Adventures of a Mathematician. Scribner’s, New York, 1976.
[30] Leslie G. Valiant. Parallelism in comparison problems. SIAM J. Comput., 4(3):348–355, 1975.
[31] Andrew Chi-Chih Yao and Frances Foong Yao. On fault-tolerant networks for sorting. SIAM
J. Comput., 14(1):120–128, 1985.