
Edge Finding Filtering Algorithm for Discrete Cumulative Resources in O(kn log n)
Petr Vilím
ILOG S.A. an IBM Company, 9, rue de Verdun, BP 85
F-94253 Gentilly Cedex, France
petr [email protected]
Abstract. This paper presents a new Edge Finding algorithm for discrete cumulative resources, i.e. resources which can process several activities simultaneously up to some maximal capacity C. The algorithm has a better time complexity than the current version of this algorithm: O(kn log n) versus O(kn²), where n is the number of activities on the resource and k is the number of distinct capacity demands. Moreover, the new algorithm is slightly stronger and it is able to handle optional activities. The algorithm is based on the Θ-tree, a binary tree data structure which has already proved to be very useful in filtering algorithms for unary resource constraints.
Note: This version of the paper differs from the original: it fixes several mistakes (mostly typos). All fixes are marked by footnotes and they include the author of the fix.
Version: October 28th 2010
1 Introduction
Nowadays, constraint-based scheduling engines like IBM ILOG CP-Optimizer [1] make it possible to describe and solve very complex scheduling problems involving a variety of different constraints. This paper describes a filtering algorithm called Edge Finding for one of them: the discrete cumulative resource constraint.
Let us demonstrate the problem on the simple example in Figure 1. Note that this example may be just a small part of a much more complex problem.
Fig. 1. An example: est_D can be updated from 0 to 4. (Resource capacity C = 3; est_A = est_D = 0, est_B = est_C = 2, lct_A = lct_B = lct_C = 5.)
There are three equivalent workers (a resource with capacity C = 3) who must perform four different activities T = {A, B, C, D}. Activity A requires all three workers (c_A = 3) for one hour (p_A = 1), activity B requires only one worker (c_B = 1) but for 3 hours (p_B = 3), and the remaining two activities C and D require two workers each (c_C = c_D = 2), activity C for 2 hours (p_C = 2) and activity D for 3 hours (p_D = 3). Moreover, the earliest possible starting time of activities A and D is zero (est_A = est_D = 0), while for activities B and C it is 2 (est_B = est_C = 2). The latest possible completion time (deadline) for activities A, B and C is 5 (lct_A = lct_B = lct_C = 5); activity D does not have a deadline (lct_D = ∞).
Looking closely at the problem we can see that there is no way for D to start before 4. Therefore we can update est_D from 0 to 4 and in this way limit the search space of the problem. The rest of this paper describes the algorithm (called Edge Finding) which performs such an update.
Note that there are also filtering algorithms for discrete cumulative resources other than Edge Finding, for example Timetable propagation [2], Not-First/Not-Last [3], Energetic Reasoning [2], Max Energy propagation [4] or Extended Edge Finding [5]. However, Timetable and Edge Finding are the ones which are used most of the time.
Let us quickly review existing Edge Finding algorithms for discrete cumulative resources. To the author's knowledge the current state-of-the-art algorithm can be found in [5]. In that paper Luc Mercier and Pascal Van Hentenryck proved that the original cumulative Edge Finding algorithm with time complexity O(n²) in [2] is incomplete, and therefore they designed a new (complete) algorithm with time complexity O(kn²).
This paper further improves the algorithm of [5] in several aspects:
– The Θ-tree data structure improves the time complexity from O(kn²) to O(kn log n).
– Better usage of the relation "ends before end" makes the filtering a little bit stronger. There are cases when the new algorithm propagates while the old one does not (Section 6.2).
– The algorithm can be easily adapted to handle optional activities. Although propagation of optional activities can be further improved [6], it is already pretty strong. We will show how to handle optional activities at the end of the paper (Section 9).
– The algorithm is based on modified Edge Finding rules which are more suitable for propagation. We provide a proof that the new rules are not weaker than the original ones (Section 8).
Like the algorithm of [5], the new algorithm has two phases:
Detection phase tries to discover necessary relative positions of activities on the resource. The result of this phase is a partial knowledge of a relation "ends before end" (see later), which will be used in the next phase. For the example in Figure 1 the algorithm detects in this phase that activity D must end after the end of {A, B, C}.
Adjustment phase utilizes the results of the previous phase to improve temporal bounds of activities: earliest start times and latest completion times.
In comparison with the algorithm of [5], the time complexities of both phases are improved: for the detection phase from O(n²) to O(n log n), for the adjustment phase from O(kn²) to O(kn log n). For simplicity we will describe each phase separately. We present only the algorithm to update earliest start times (not latest completion times) because the algorithm for the update of latest completion times is symmetrical.
Note also that the Edge Finding algorithm is not idempotent and therefore it is usually repeated until a fixpoint is reached.
2 Basic Notation
Let us formalize the notation already used in the introduction. The input of the algorithm is a discrete cumulative resource with capacity C ∈ N⁺ and a set of activities T (|T| = n) which must be processed by the resource. Each activity i ∈ T is characterized by the following attributes:
– earliest possible start time est_i ∈ N,
– latest possible completion time lct_i ∈ N,
– processing time (duration) p_i ∈ N – a constant,
– required capacity c_i ∈ N – a constant, c_i ≤ C.
Activities are not preemptive, that is, if an activity i starts at time t it must
continue without interruption until time t + pi where it ends. During the whole
processing from t to t + pi it requires capacity ci from the resource. At any
time, the total capacity required from the resource cannot exceed the maximum
capacity C. We define k to be the number of distinct capacity demands: k = |{c_l, l ∈ T}|.
Values est_i and lct_i can change: they can be updated by other filtering algorithms or by the Edge Finding algorithm itself. Therefore the input of the Edge Finding algorithm is the current bounds est_i and lct_i, and the output is new (updated) bounds.
Note that at any time the following inequality must hold for every activity i:

est_i + p_i ≤ lct_i

If it does not hold then the problem does not have any solution and the propagation ends (a fail).
From the processing time and the required capacity we can compute the energy of an activity i:

e_i = c_i p_i

The energy corresponds to the area occupied by the activity on the resource when depicted as in Figure 1.
The notation for earliest start time, latest completion time and energy can be easily extended for a set of activities Ω ⊆ T:

est_Ω = min {est_i, i ∈ Ω}
lct_Ω = max {lct_i, i ∈ Ω}
e_Ω = Σ_{i∈Ω} e_i
3 Earliest Completion Time, Energy Envelope
A critical role in the algorithm is played by the computation of the possible earliest completion time of a set of activities Θ ⊆ T. This computation was already described in detail in [4], therefore here we only quickly repeat the main idea. We defined a lower bound Ect(Θ) of the earliest completion time of a set of activities Θ ⊆ T as:

Ect(Θ) = ⌈Env(Θ) / C⌉

where Env(Θ) is the so-called energy envelope of the set Θ:

Env(Θ) = max_{Ω⊆Θ} {C est_Ω + e_Ω}                                           (1)

3.1 Cumulative Θ-tree
The paper [4] also describes how to compute Env(Θ). The idea is to organize the set Θ in a balanced binary tree which we call a cumulative Θ-tree. Activities are represented by leaf nodes and sorted by est_i from left to right. Each node v of the tree holds the following values:

e_v = e_{Leaves(v)}                                                          (2)
Env_v = Env(Leaves(v))                                                       (3)

where Leaves(v) is the set of all activities represented by the leaves of the subtree rooted at v.
Figure 2 shows a Θ-tree for Θ = {A, B, C, D} from Figure 1. Notice that the
energy envelope of the represented set Θ is equivalent to the value Env of the
root node. We can conclude that Ect ({A, B, C, D}) = ⌈16/3⌉ = 6.
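For instance, the maximum in definition (1) is attained here by Ω = {A, B, C, D} itself: C est_Ω + e_Ω = 3 · 0 + (3 + 6 + 3 + 4) = 16, whereas for example Ω = {B, C} gives only 3 · 2 + (3 + 4) = 13. Hence Env({A, B, C, D}) = 16 and indeed Ect({A, B, C, D}) = ⌈16/3⌉ = 6.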
For a leaf node v representing an activity i ∈ T the values in the tree are set to:

e_v = e_i
Env_v = Env({i})

For internal nodes v these values can be computed recursively from their child nodes left(v) and right(v) as shown in the following proposition.
Fig. 2. An example of a Θ-tree for Θ = {A, B, C, D} from Figure 1. (Leaves in est order: A, D, B, C; the root node has e = 16 and Env = 16.)
Proposition 1. For an internal node v, the values e_v and Env_v can be computed by the following recursive formulas:

e_v = e_{left(v)} + e_{right(v)}                                             (4)
Env_v = max {Env_{left(v)} + e_{right(v)}, Env_{right(v)}}                   (5)

Proof. See [4]. ⊔⊓

Thanks to formulas (4) and (5), the computation of the values e_v and Env_v can be integrated within standard operations with balanced binary trees without changing their usual time complexities.
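As an illustration of how a Θ-tree carries these values, here is a minimal sketch (Python; all names are ours, and the tree is built statically over activities pre-sorted by est instead of being maintained as a self-balancing tree as in the actual algorithm). It applies formulas (4) and (5) bottom-up and reproduces Env = 16 and Ect = 6 for the example of Figure 1.

from dataclasses import dataclass

@dataclass
class Activity:
    est: int   # earliest start time est_i
    p: int     # processing time p_i
    c: int     # required capacity c_i

    @property
    def e(self) -> int:
        # energy e_i = c_i * p_i
        return self.c * self.p

@dataclass
class Node:
    e: int     # energy of the activities in the leaves below this node
    env: int   # energy envelope Env of the activities in the leaves below this node

def leaf(a: Activity, C: int) -> Node:
    # Leaf values: e_v = e_i, Env_v = Env({i}) = C * est_i + e_i
    return Node(e=a.e, env=C * a.est + a.e)

def combine(left: Node, right: Node) -> Node:
    # Formulas (4) and (5); leaves are sorted by est from left to right
    return Node(e=left.e + right.e, env=max(left.env + right.e, right.env))

def build(activities, C: int) -> Node:
    # Pairwise reduction keeps the left-to-right leaf order; balance is irrelevant here.
    nodes = [leaf(a, C) for a in sorted(activities, key=lambda a: a.est)]
    while len(nodes) > 1:
        nodes = [combine(nodes[i], nodes[i + 1]) if i + 1 < len(nodes) else nodes[i]
                 for i in range(0, len(nodes), 2)]
    return nodes[0]

if __name__ == "__main__":
    # The four activities of Figure 1; resource capacity C = 3.
    acts = [Activity(0, 1, 3), Activity(2, 3, 1), Activity(2, 2, 2), Activity(0, 3, 2)]
    root = build(acts, C=3)
    print(root.env)              # 16, the value Env in the root node of Figure 2
    print(-(-root.env // 3))     # Ect({A, B, C, D}) = ceil(16 / 3) = 6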
4 Relation "Ends before End"
Before going into details about Edge Finding, let us introduce a notion of ends
before end. We say that an activity j ends before the end of an activity i (denoted
by j ⋖ i) if in all solutions end(j) ≤ end(i). The goal of the Edge Finding
algorithm is to discover as much of the relation ⋖ as possible and use it to
update temporal bounds. Until a solution is found the relation “ends before
end” is known only partially. Therefore in the following we will use the notation
j ⋖ i in the sense “it is known that in all solutions j ends before the end of i”.
The notation for "ends before end" can be extended also to sets of activities:

∀Ω ⊂ T, ∀i ∈ T \ Ω : Ω ⋖ i ⇔ (∀j ∈ Ω : j ⋖ i)

5 Edge Finding: Detection Rule
Let us start with the definition of the left cut of T by an activity j:

LCut(T, j) = {l, l ∈ T & lct_l ≤ lct_j}

To detect the relation ⋖ we will use the following rule:

∀j ∈ T, i ∈ T \ LCut(T, j) : Ect(LCut(T, j) ∪ {i}) > lct_j ⇒ LCut(T, j) ⋖ i

The idea of this rule is as follows. The set LCut(T, j) must be processed before lct_j. So if there is not enough time to process i together with LCut(T, j) then the activity i must end after LCut(T, j). Note that this rule is different from the original Edge Finding rule. We will show in Section 8 that this new rule is not weaker.
The rule above can be rewritten using the energy envelope into a form more suitable for the algorithm:

∀j ∈ T, i ∈ T \ LCut(T, j) : Env(LCut(T, j) ∪ {i}) > C lct_j ⇒ LCut(T, j) ⋖ i          (EF1)

Our goal is to discover as much of the relation ⋖ as possible. Therefore for each activity i ∈ T we are looking for an activity j ∈ T with maximal¹ lct_j such that LCut(T, j) ⋖ i can be detected by rule (EF1). This is the task for the first part of the algorithm.
6 Detection Algorithm
Notice that once we prove by (EF1) that LCut(T, j) ⋖ i then it is pointless to evaluate rule (EF1) for the same activity i and any j′ ∈ T such that lct_{j′} ≤ lct_j, because it cannot bring any new information (LCut(T, j′) ⊆ LCut(T, j)).
The algorithm is similar to the Edge Finding algorithm for the unary resource [7]. We iterate over activities j in non-increasing order of lct_j and we maintain a set Λ ⊆ T \ LCut(T, j) of all activities i for which we have not yet found a set which must end before the end of i. In each step of the algorithm we check whether there is some activity i ∈ Λ such that rule (EF1) proves that LCut(T, j) ⋖ i. In other words, we test whether the following inequality holds:

max_{i∈Λ} {Env(LCut(T, j) ∪ {i})} > C lct_j

– If it holds then we find the responsible activity i ∈ Λ and conclude that LCut(T, j) ⋖ i. Therefore we can remove i from Λ.
– If it does not hold then we move activity j into Λ and continue with the next activity j (because there is no activity i such that LCut(T, j) ⋖ i can be proved by (EF1)).
To formalize the algorithm let us define:

Env(Θ, Λ) = max_{i∈Λ} {Env(Θ ∪ {i})}

Although we have not yet shown how to compute Env(Θ, Λ), we can already present the resulting Algorithm 1.1. The result of the computation is the array prec which has the following meaning:

∀i ∈ T : {l, l ∈ T & lct_l ≤ prec[i]} ⋖ i

¹ Maximality of lct_j assures that for any other j′ ∈ T satisfying LCut(T, j′) ⋖ i by (EF1) we have LCut(T, j′) ⊆ LCut(T, j).
Algorithm 1.1. Edge Finding: Detection
 1  for i ∈ T do
 2      prec[i] := −∞ ;
 3  Θ := T ;
 4  Λ := ∅ ;
 5  for j ∈ T in non-increasing order of lct_j do begin
 6      while Env(Θ, Λ) > C lct_j do begin
 7          i := activity from Λ responsible for Env(Θ, Λ) ;
 8          prec[i] := lct_j ;   // means: LCut(T, j) ⋖ i
 9          Λ := Λ \ {i} ;
10      end ;
11      Θ := Θ \ {j} ;
12      Λ := Λ ∪ {j} ;
13  end ;
In the algorithm, Θ = LCut(T, j) unless there are duplicate values of lct_j (the algorithm is correct even with such duplicates). In the following we will concentrate on the computation of Env(Θ, Λ) and the proof that Algorithm 1.1 has time complexity O(n log n).
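To make rule (EF1) concrete, here is a small brute-force check (Python; names are ours; it enumerates all subsets, so it is exponential and meant purely as a validation aid on tiny instances, not as the algorithm itself). It evaluates the rule directly from definition (1) on the example of Figure 1.

from itertools import chain, combinations

# Activities of Figure 1: name -> (est, p, c); resource capacity C = 3.
ACTS = {"A": (0, 1, 3), "B": (2, 3, 1), "C": (2, 2, 2), "D": (0, 3, 2)}
LCT = {"A": 5, "B": 5, "C": 5, "D": float("inf")}
C = 3

def energy(x):
    est, p, c = ACTS[x]
    return c * p

def env(names):
    # Definition (1): Env(S) = max over non-empty Omega of C * est_Omega + e_Omega
    names = list(names)
    omegas = chain.from_iterable(combinations(names, r) for r in range(1, len(names) + 1))
    return max(C * min(ACTS[x][0] for x in o) + sum(energy(x) for x in o) for o in omegas)

def ef1_detects(j, i):
    # Rule (EF1): Env(LCut(T, j) ∪ {i}) > C * lct_j  implies  LCut(T, j) ⋖ i
    lcut = [l for l in ACTS if LCT[l] <= LCT[j]]
    return env(lcut + [i]) > C * LCT[j]

if __name__ == "__main__":
    # Env({A, B, C, D}) = 16 > C * lct_A = 15, so {A, B, C} ⋖ D, as claimed in the introduction.
    print(ef1_detects("A", "D"))     # True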
6.1 Computation of Env(Θ, Λ)
The idea is to extend the cumulative Θ-tree into a Θ-Λ-tree in a similar way as it was done for the unary resource in [7]. The cumulative Θ-Λ-tree is a balanced binary tree where each leaf represents one activity from the set Θ or Λ. Leaves are sorted from left to right according to est_i. Each node v of the tree holds the following values:

e_v = e_{Leaves(v)∩Θ}
e^Λ_v = e_{Leaves(v)∩Θ} + max {e_i, i ∈ Leaves(v)∩Λ}
Env_v = Env(Leaves(v) ∩ Θ)
Env^Λ_v = Env(Leaves(v) ∩ Θ, Leaves(v) ∩ Λ)

Notice that Env(Θ, Λ) is equivalent to Env^Λ_v in the root node. For an example of the cumulative Θ-Λ-tree see Figure 3.
For a leaf node v representing an activity i ∈ Θ ∪ Λ these values are set to:

e_v = e_i if i ∈ Θ, and e_v = 0 if i ∈ Λ
e^Λ_v = −∞ if i ∈ Θ, and e^Λ_v = e_i if i ∈ Λ
Env_v = C est_i + e_i if i ∈ Θ, and Env_v = −∞ if i ∈ Λ
Env^Λ_v = −∞ if i ∈ Θ, and Env^Λ_v = C est_i + e_i if i ∈ Λ
Fig. 3. An example of a Θ-Λ-tree for Θ = LCut(T, A) = {A, B, C} and Λ = {D} from Figure 1. We see that Env(Θ, Λ) = 16 which is more than C lct_A = 15 and therefore {A, B, C} ⋖ D. (The root node has e = 10, e^Λ = 16, Env = 13, Env^Λ = 16.)
For internal nodes v these values are computed recursively from their children
nodes left(v) and right(v):
Proposition 2. For an internal node v the values e_v, e^Λ_v, Env_v and Env^Λ_v can be computed by the following formulas:

e_v = e_{left(v)} + e_{right(v)}                                                              (6)
e^Λ_v = max {e^Λ_{left(v)} + e_{right(v)}, e_{left(v)} + e^Λ_{right(v)}}                      (7)
Env_v = max {Env_{left(v)} + e_{right(v)}, Env_{right(v)}}                                    (8)
Env^Λ_v = max {Env^Λ_{left(v)} + e_{right(v)}, Env_{left(v)} + e^Λ_{right(v)}, Env^Λ_{right(v)}}   (9)
Proof. First notice that formulas (6) and (8) are the same as formulas (4) and (5) in Proposition 1. The addition of new leaves representing Λ into the tree cannot invalidate these formulas because for these leaves v we have e_v = 0 and Env_v = −∞. Therefore formulas (6) and (8) hold by Proposition 1.
Formula (7) is simple to prove. It is enough to realize that the difference between the computation of e_v by (6) and the computation of e^Λ_v is that it is allowed to use one of the activities i ∈ Λ. This activity i can be either in the left subtree of v (and in this case we can use e^Λ_{left(v)} instead of e_{left(v)}) or in the right subtree of v (and we can use e^Λ_{right(v)} instead of e_{right(v)}). Putting this together we transform formula (6) into (7).
It remains to prove formula (9). Again the difference between the computation of Env_v and Env^Λ_v is that it is allowed to use one of the activities i ∈ Λ. This activity can be either in the left subtree of v (and in this case we can use Env^Λ_{left(v)} instead of Env_{left(v)}) or in the right subtree of v (and we can use Env^Λ_{right(v)} instead of Env_{right(v)}, or e^Λ_{right(v)} instead of e_{right(v)}, but not both). This way we transform formula (8) into (9). ⊔⊓

Fig. 4. An example: {M, N} ⋖ O but rule (EF1) is not able to detect it. (Resource capacity C = 3; est_M = est_N = 2, est_O = 0, lct_M = lct_N = 5.)
Thanks to these recursive formulas it is possible to recompute the internal values within standard operations with balanced binary trees without changing their time complexity. Therefore lines 9, 11 and 12 of Algorithm 1.1 have time complexity O(log n) and line 6 has time complexity O(1). To prove that the time complexity of the whole Algorithm 1.1 is O(n log n) it remains to show that the time complexity of line 7 is also O(log n).
The activity i ∈ Λ responsible for Env(Θ, Λ) can be found by following a path from the root of the tree to the responsible leaf. In each internal node we can recognize in O(1) whether the responsible activity is in the left or the right subtree by analyzing which part of formula (9) or (7) was used in the given node:

responsible_eΛ(v) = responsible_eΛ(left(v))      if e^Λ_v = e^Λ_{left(v)} + e_{right(v)}
                  = responsible_eΛ(right(v))     if e^Λ_v = e_{left(v)} + e^Λ_{right(v)}

responsible_EnvΛ(v) = responsible_EnvΛ(right(v))   if Env^Λ_v = Env^Λ_{right(v)}
                    = responsible_eΛ(right(v))     if Env^Λ_v = Env_{left(v)} + e^Λ_{right(v)}
                    = responsible_EnvΛ(left(v))    if Env^Λ_v = Env^Λ_{left(v)} + e_{right(v)}

We start the search in the root node r looking for responsible_EnvΛ(r) and continue down the tree using the formulas above (possibly switching from responsible_EnvΛ(v) to responsible_eΛ(v) on the path) until we reach a leaf.
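The following sketch (Python; names are ours) illustrates formulas (6)-(9) on the Θ-Λ-tree of Figure 3. Instead of the top-down search for the responsible leaf described above, each node here simply carries the Λ-activity responsible for its e^Λ and Env^Λ values upward; this is equivalent bookkeeping that is convenient in a small sketch, whereas the paper's descent avoids storing these extra fields.

from dataclasses import dataclass
from typing import Optional

NEG_INF = float("-inf")

@dataclass
class TLNode:
    e: float                        # e_v
    e_lam: float                    # e^Λ_v
    env: float                      # Env_v
    env_lam: float                  # Env^Λ_v
    resp_e: Optional[str] = None    # Λ-activity responsible for e^Λ_v
    resp_env: Optional[str] = None  # Λ-activity responsible for Env^Λ_v

def theta_leaf(est, e, C):
    return TLNode(e=e, e_lam=NEG_INF, env=C * est + e, env_lam=NEG_INF)

def lambda_leaf(name, est, e, C):
    return TLNode(e=0, e_lam=e, env=NEG_INF, env_lam=C * est + e,
                  resp_e=name, resp_env=name)

def combine(l: TLNode, r: TLNode) -> TLNode:
    e = l.e + r.e                                   # formula (6)
    env = max(l.env + r.e, r.env)                   # formula (8)
    if l.e_lam + r.e >= l.e + r.e_lam:              # formula (7)
        e_lam, resp_e = l.e_lam + r.e, l.resp_e
    else:
        e_lam, resp_e = l.e + r.e_lam, r.resp_e
    candidates = [                                  # formula (9)
        (l.env_lam + r.e, l.resp_env),
        (l.env + r.e_lam, r.resp_e),
        (r.env_lam, r.resp_env),
    ]
    env_lam, resp_env = max(candidates, key=lambda t: t[0])
    return TLNode(e, e_lam, env, env_lam, resp_e, resp_env)

if __name__ == "__main__":
    C = 3
    # Figure 3: Theta = LCut(T, A) = {A, B, C}, Lambda = {D}; leaves sorted by est: A, D, B, C.
    a = theta_leaf(0, 3, C)
    d = lambda_leaf("D", 0, 6, C)
    b = theta_leaf(2, 3, C)
    c = theta_leaf(2, 4, C)
    root = combine(combine(a, d), combine(b, c))
    print(root.env_lam, root.resp_env)   # 16 D: since 16 > C * lct_A = 15, {A, B, C} ⋖ D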
6.2 Improving Detection
Consider the example in Figure 4. In this example we can see that in every solution end(M) ≤ end(O) because the maximum possible value of end(M) is lct_M = 5 and the minimum possible value of end(O) is est_O + p_O = 5. Therefore M ⋖ O. Similarly N ⋖ O. However, the Edge Finding rule (EF1) is not able to detect that {M, N} ⋖ O and we miss the update of est_O from 0 to 5. This is a situation similar to Detectable Precedences for the unary resource described in [7].
The idea is to improve the propagation by improving the knowledge of the relation ⋖ stored in the array prec:

prec[i] := max {prec[i], est_i + p_i}

It takes time O(n) to update all prec[i] according to the formula above.
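In code this post-processing is a single linear pass (a minimal sketch with container names of our choosing):

def strengthen_prec(prec, est, p):
    # prec, est, p are dicts indexed by activity. prec[i] is a threshold: every activity l
    # with lct_l <= prec[i] is known to end before the end of i. Since end(i) >= est_i + p_i
    # in every solution, any l with lct_l <= est_i + p_i also ends before the end of i,
    # so prec[i] can safely be raised to est_i + p_i.
    for i in prec:
        prec[i] = max(prec[i], est[i] + p[i])
    return prec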
Fig. 5. An example: est_Z can be updated from 0 to 2. (Resource capacity C = 2; est_W = est_X = est_Z = 0, est_Y = 6, lct_W = lct_X = lct_Y = 7.)
7 Time Bound Adjustment
Let us return again to the example in Figure 1. The algorithm presented in the previous section detected that {A, B, C} ⋖ D. We will try to use this knowledge to update est_D. Notice that activity A is actually not important for the update (but it was important in the previous phase to realize that {A, B, C} ⋖ D); it is the set Ω = {B, C} ⊂ Θ which determines the new est_D. With this set Ω we can compute the new value of est_D, denoted est′_D:

est′_D = est_Ω + ⌈(e_Ω − (C − c_D)(lct_Ω − est_Ω)) / c_D⌉ = 2 + ⌈(7 − (3 − 2)(5 − 2)) / 2⌉ = 4
However, when LCut(T, j) ⋖ i we cannot use just any subset Ω ⊆ LCut(T, j) to compute the update of est_i as we did for est_D above. Consider the example in Figure 5. Here {W, X, Y} ⋖ Z but we cannot use Ω = {Y} because the result would be invalid:

est_Ω + ⌈(e_Ω − (C − c_Z)(lct_Ω − est_Ω)) / c_Z⌉ = 6 + ⌈(1 − (2 − 1)(7 − 6)) / 1⌉ = 6

The valid update would be to set est_Z to 2, not to 6. The reason why we cannot use Ω = {Y} for the update of est_Z is that there is not enough energy in Ω = {Y} to be in potential conflict with Z.
Let us generalize the idea demonstrated by these examples. When LCut(T, j) ⋖ i then we want to update est_i in the following way:

LCut(T, j) ⋖ i  ⇒  est′_i := max {update(j, c_i), est_i}                                         (EF2)

where:

update(j, c) = max_{Ω⊆LCut(T,j), e_Ω>(C−c)(lct_Ω−est_Ω)} ( est_Ω + ⌈(e_Ω − (C − c)(lct_Ω − est_Ω)) / c⌉ )

The condition e_Ω > (C − c)(lct_Ω − est_Ω) makes sure that we do not make any invalid update as described above.
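As a concrete check of this definition, the following brute force (Python; names are ours; it enumerates every subset, so it is exponential and useful only as a validation aid on tiny instances) computes update(j, c) literally and reproduces the value 4 used to update est_D in the example of Figure 1; the maximum is attained by Ω = {B, C}.

from itertools import combinations
from math import ceil, inf

# Activities of Figure 1: name -> (est, lct, p, c); resource capacity C = 3.
ACTS = {"A": (0, 5, 1, 3), "B": (2, 5, 3, 1), "C": (2, 5, 2, 2), "D": (0, inf, 3, 2)}
C = 3

def update(j, c):
    # Enumerate every Omega ⊆ LCut(T, j) with e_Omega > (C - c)(lct_Omega - est_Omega)
    # and return the largest est_Omega + ceil((e_Omega - (C - c)(lct_Omega - est_Omega)) / c).
    lcut = [l for l in ACTS if ACTS[l][1] <= ACTS[j][1]]
    best = -inf
    for r in range(1, len(lcut) + 1):
        for omega in combinations(lcut, r):
            est_o = min(ACTS[x][0] for x in omega)
            lct_o = max(ACTS[x][1] for x in omega)
            e_o = sum(ACTS[x][2] * ACTS[x][3] for x in omega)
            rest = e_o - (C - c) * (lct_o - est_o)
            if rest > 0:            # enough energy to be in conflict with capacity c
                best = max(best, est_o + ceil(rest / c))
    return best

if __name__ == "__main__":
    # The detection phase found {A, B, C} ⋖ D, i.e. prec[D] = 5; j is any activity with lct_j = 5.
    print(update("A", c=2))         # 4: by (EF2), est_D is updated from 0 to max{4, 0} = 4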
In the following we will describe how to compute the values update(j, c). When all values update(j, c) are computed, the update of est_i using the array prec and formula (EF2) is trivial.
Let’s assume for simplicity that there are no duplicates in the set {lctj , j ∈
T }. Therefore if we sort activities T by increasing lctj in a sequence j1 , j2 , . . . , jn
we get:
LCut(T, j1 ) ( LCut(T, j2 ) ( · · · ( LCut(T, jn )
So when we compute value update(jl , c) we do not have to iterate again on all
possible subsets Ω ⊆ LCut(T, jl ), we can use the fact that we already considered
part of them in the computation of update(jl−1 , c). I.e. in the outermost cycle
of the algorithm we iterate over all c ∈ {cm , m ∈ T } and in the inner cycle we
iterate over all jl ∈ T and compute:
(
diff (j1 , c)
when l = 1
update(jl , c) =
(10)
max {update (jl−1 , c) , diff (jl , c)} when l > 1
where:

diff(j, c) = max_{Ω⊆LCut(T,j), e_Ω>(C−c)(lct_j−est_Ω)} ( est_Ω + ⌈(e_Ω − (C − c)(lct_j − est_Ω)) / c⌉ )      (11)

So in the computation of diff(j, c) we "pretend" that all sets Ω ⊆ LCut(T, j) have lct_Ω = lct_j. This is not true, there may be sets Ω ⊊ LCut(T, j) such that lct_Ω < lct_j. However these sets are correctly considered during the computation of diff(j′, c) such that lct_{j′} = lct_Ω.
Let’s define2 function maxest(j, c) as:
maxest(j, c) = max {estΩ , Ω ⊆ LCut (T, j) & eΩ > (C − c)(lctj − estΩ )}
Notice that for a particular set Ωm which defines maxest(j, c), i.e. estΩm =
maxest(j, c), we have:
eΩm −(C − c)(lctj − estΩ )
> estΩm = maxest(j, c)
estΩm +
c
and therefore diff(j, c) > maxest(j, c). Now we will show that:
eΩ −(C − c)(lctj − estΩ )
estΩ +
diff(j, c) =
max
c
Ω⊆LCut(T,j)
(12)
estΩ ≤maxest(j,c)
The reason follows. The original condition was more restrictive than the new one:
in (12) we iterate over more sets Ω than in (11). However for every additional
set Ω we have estΩ ≤ maxest(j, c) and eΩ ≤ (C − c)(lctj − estΩ ) therefore:
eΩ −(C − c)(lctj − estΩ )
estΩ +
≤ estΩ ≤ maxest(j, c)
c
And we already know that diff(j, c) > maxest(j, c). Therefore newly added sets
cannot influence the resulting maximum value in formula (12).
2
Thanks to Luc Mercier and Joseph Scott for correcting min/max error in the paper.
Formula (12) is algebraically equivalent to:

diff(j, c) = ⌈(Env(j, c) − (C − c) lct_j) / c⌉                                                    (13)

where:

Env(j, c) = max_{Ω⊆LCut(T,j), e_Ω>(C−c)(lct_j−est_Ω)} {C est_Ω + e_Ω}                             (14)
We can split each set Ω by maxest(j, c) into two parts:

Ω_1 = {l, l ∈ Ω & est_l ≤ maxest(j, c)}
Ω_2 = {l, l ∈ Ω & est_l > maxest(j, c)}

and then C est_Ω + e_Ω = C est_{Ω_1} + e_{Ω_1} + e_{Ω_2}. Let us apply this idea to formula (14). We define:

α(j, c) = {l, l ∈ LCut(T, j) & est_l ≤ maxest(j, c)}
β(j, c) = {l, l ∈ LCut(T, j) & est_l > maxest(j, c)}

and (14) is equivalent to:

Env(j, c) = max_{Ω_1⊆α(j,c), Ω_2⊆β(j,c)} {C est_{Ω_1} + e_{Ω_1} + e_{Ω_2}} = e_{β(j,c)} + Env(α(j, c))        (15)
We can compute Env(α(j, c)) by building a Θ-tree for the set α(j, c) as shown in Proposition 1. However it is more suitable for the algorithm to build a Θ-tree for the whole set LCut(T, j) and cut it into two parts just before the computation of Env(α(j, c)). The cut operation splits the tree into two trees; it is done in such a way that all activities l ∈ LCut(T, j) with est_l ≤ maxest(j, c) go to the left part while the others go into the right part. See Figure 6 for an example. The cut operation has time complexity O(log n) and it splits the set LCut(T, j) into the sets α(j, c) and β(j, c). The value Env(α(j, c)) can be found in the root node of the Θ-tree for α(j, c), and e_{β(j,c)} can be found in the root node of the Θ-tree for β(j, c).
It remains to show how to compute maxest(j, c). The value maxest(j, c) was defined as:

maxest(j, c) = max {est_Ω, Ω ⊆ LCut(T, j) & e_Ω > (C − c)(lct_j − est_Ω)}

The condition e_Ω > (C − c)(lct_j − est_Ω) is algebraically equivalent to:

(C − c) est_Ω + e_Ω > (C − c) lct_j                                                               (16)

Notice that the left part of this inequality is very similar to the computation of the energy envelope, just with C replaced by (C − c). Let us define a new variant of the energy envelope, Env_c:

Env_c(Θ) = max_{Ω⊆Θ} {(C − c) est_Ω + e_Ω}
Fig. 6. Example: computation of Env(j, c) for c = 1 and j = Y from the example in Figure 5. Therefore LCut(T, j) = {W, X, Y}. Situation just before the cut. (Leaves sorted by est: W with est_W = 0 and e_W = 2, X with est_X = 0 and e_X = 6, Y with est_Y = 6 and e_Y = 1; the cut separates α(j, c) = {W, X} from β(j, c) = {Y}.)
The computation of Env_c can again be done using a Θ-tree by Proposition 1. We can compute Env and Env_c in the same Θ-tree as shown in Figure 6. Now we can see that because of condition (16) it must hold³ that

Env_c(β(j, c)) ≤ (C − c) lct_j

but if we included activities l with est_l = maxest(j, c) in the right tree (in other words, if we did the cut more to the left) then this condition would not hold. That allows us to find a leaf l with est_l = maxest(j, c) by following a path from the root to the leaf, as shown in Algorithm 1.2. Using this procedure we can compute all values update(j, c) by Algorithm 1.3⁴ with time complexity O(kn log n). Note that once update(j, c) is computed we can trivially update the values est_i using (EF2).
³ Thanks to Albert Oliveras Llunell for correcting a typo in the original paper.
⁴ Thanks to Joseph Scott for correcting a typo on line 9.
Algorithm 1.2. Computation of maxest(j, c) using the Θ-tree for LCut(T, j)
 1  v := root ;
 2  E := 0 ;
 3  while v is not a leaf node do begin
 4      if Env_c(right(v)) + E > (C − c) lct_j then
 5          v := right(v) ;
 6      else begin
 7          E := E + e_{right(v)} ;
 8          v := left(v) ;
 9      end ;
10  end ;
11  l := activity represented by leaf v ;
12  return est_l ;
Algorithm 1.3. Computation of all update(j, c)
 1  for c ∈ {c_m, m ∈ T} do begin
 2      Θ := ∅ ;
 3      upd := −∞ ;
 4      for j ∈ T in non-decreasing order by lct_j do begin
 5          Θ := Θ ∪ {j} ;
 6          maxest := maxest(j, c) ;                      // see Algorithm 1.2
 7          (α, β) := Cut(Θ, maxest) ;
 8          Env(j, c) := e(β) + Env(α) ;                  // see (15)
 9          diff := ⌈(Env(j, c) − (C − c) lct_j) / c⌉ ;   // see (13)
10          upd := max(upd, diff) ;                       // see (10)
11          update(j, c) := upd ;
12          Θ := join(α, β) ;
13      end ;
14  end ;

8 Relation with standard Edge Finding

We will show that the algorithm described in this paper does not miss any update done by the Edge Finding algorithm described in [5]. It is enough to prove that the original Edge Finding propagation rules are subsumed by the new rules (EF1) and (EF2).
The traditional Edge Finding rule is:

∀i ∈ T, ∀Θ ⊆ T \ {i} : C (lct_Θ − est_{Θ∪{i}}) < e_{Θ∪{i}} ⇒ est_i := max(est_i, newest_i)        (17)

where:

newest_i = max_{Ω⊆Θ, e_Ω>(C−c_i)(lct_Ω−est_Ω)} ( est_Ω + ⌈(e_Ω − (C − c_i)(lct_Ω − est_Ω)) / c_i⌉ )
Let us consider an activity i and sets Θ and Ω which achieve the best update by the rule above. Then we can define j to be an activity from Θ such that lct_j = lct_Θ. Because Θ ⊆ LCut(T, j) we can see that rule (EF1) holds for i and j. And because Ω ⊆ Θ ⊆ LCut(T, j), the update by rule (EF2) must be at least the same as by the original rule (17).
9 Optional Activities
An optional activity is an activity which may or may not be present in the resulting schedule [1]. Optional activities make the modeling of certain types of problems much easier (for example dealing with alternatives) and they also allow the CP engine to propagate better. Therefore it is very important that the Edge Finding algorithm can handle optional activities.
To handle optional activities we can use the same idea as suggested in [4]: instead of changing the algorithm we can just change its input data. If an activity j is optional, we set lct_j = ∞ for the algorithm regardless of the real value of lct_j. This way the algorithm can never conclude that j ⋖ i for any activity i, because from the point of view of the algorithm the activity j can always be scheduled later than i. Therefore optional activities will be influenced by non-optional ones, but non-optional activities will not be influenced by optional ones.
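A minimal sketch of this preprocessing (Python; the names are ours), showing that only the algorithm's view of lct changes:

from math import inf

def lct_for_edge_finding(lct, optional):
    # lct: dict activity -> latest completion time; optional: set of optional activities.
    # An optional activity is presented to the algorithm with lct = infinity, so the
    # algorithm can never conclude that it ends before the end of another activity,
    # while it can still be pushed to the right by the non-optional activities.
    return {i: (inf if i in optional else lct[i]) for i in lct}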
Note that propagation for optional activities could be further improved as suggested for the unary resource in [6]. However, it would probably result in an increase of the time complexity of the algorithm.
10 Experimental Results
The speed of the presented algorithm was tested against the incomplete algorithm [2] by measuring the time needed for the initial propagation. These tests were done on cumulative job-shop instances with resources of capacity 2 (note that in this case k = 1). For n = 20 activities on a resource the presented algorithm is on average faster by a factor of 1.34, for n = 30 it is faster by a factor of 1.60, for n = 40 by 1.99, for n = 60 by 2.68, for n = 100 by 4.15 and for n = 200 by a factor of 7.35.
11 Conclusions
This paper presents a new Edge Finding algorithm for discrete capacity resources. The new algorithm is stronger than the state-of-the-art algorithm [5], it is faster (in terms of time complexity) and it can handle optional activities. The algorithm is successfully used by CP-Optimizer [1] starting from version 2.0.
References
1. IBM ILOG CP Optimizer. http://www.ilog.com/products/cpoptimizer/
2. Baptiste, P., Le Pape, C., Nuijten, W.: Constraint-Based Scheduling: Applying Constraint Programming to Scheduling Problems. Kluwer Academic Publishers (2001)
3. Schutt, A., Wolf, A., Schrader, G.: Not-first and not-last detection for cumulative scheduling in O(n³ log n). In: 16th International Conference on Applications of Declarative Programming and Knowledge Management, INAP 2005, Springer (2005) 66–80
4. Vilím, P.: Max energy filtering algorithm for discrete cumulative resources. In: van Hoeve, W.J., Hooker, J.N. (eds.): Proceedings of CP-AI-OR 2009. Volume 5547 of Lecture Notes in Computer Science, Springer-Verlag (2009) 294–308
5. Mercier, L., Van Hentenryck, P.: Edge finding for cumulative scheduling. INFORMS Journal on Computing 20(1) (2008) 143–153
6. Kuhnert, S.: Efficient edge-finding on unary resources with optional activities. In: Proceedings of INAP 2007 and WLP 2007 (2007)
7. Vilím, P.: Global Constraints in Scheduling. PhD thesis, Charles University in Prague, Faculty of Mathematics and Physics, Department of Theoretical Computer Science and Mathematical Logic (2007)