Structural Patterns Heuristics: Basic Idea and Concrete Instance
Michael Katz and Carmel Domshlak∗
Faculty of Industrial Engineering
Technion, Israel
{[email protected], [email protected]}

∗The work of both authors is partly supported by the Israel Science Foundation and the C. Wellner Research Fund.
Copyright © 2007, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.
Abstract
Considering admissible heuristics for cost-optimal planning, we present a generalization of pattern-database homomorphism abstractions, called "structural patterns". The basic idea of structural patterns boils down to projecting the problem in hand to provably tractable fragments of optimal planning. The key motivation behind this generalization of PDBs is to alleviate the requirement for the projections to be of a low dimensionality.
Introduction
The difference between various algorithms for planning
as heuristic search is mainly in the heuristic functions
they define and use. Most typically, an (admissible)
heuristic function for domain-independent planning is
defined as the (optimal) cost of achieving the goals in an
over-approximating abstraction of the planning problem in hand (Pearl 1984; Bonet & Geffner 2001). Such
an abstraction is obtained by relaxing certain constraints
that are present in the specification of the real problem,
and the desire is to obtain a tractable (that is, solvable
in polynomial time), and, at the same time, informative abstract problem. The main question is thus: What
constraints should we relax to obtain such an effective
over-approximating abstraction?
Conceptually, one can distinguish between homomorphism and embedding abstractions, and the former are our focus in this paper. A homomorphism
abstraction systematically contracts several states to
create a single abstract state. Most typically, such a
state-gluing is obtained by projecting the original problem onto a subset of its parameters, as if ignoring
the constraints that fall outside the projection. Homomorphisms have been successfully explored in the
scope of domain-independent pattern database (PDB)
heuristics (Edelkamp 2001; Haslum, Bonet, & Geffner 2005), inspired by the (similarly named) problem-specific heuristics for search problems such as $(k^2-1)$-puzzles, Rubik's Cube, etc. (Culberson & Schaeffer 1998). A core property of the PDB heuristics is that
the problem is projected onto a space of small (up to
logarithmic) dimensionality so that reachability analysis in that space could be done by exhaustive search.
Note that this constraint implies an inherent scalability
limitation of the PDB heuristics—as the problems of interest grow, limiting patterns to logarithmic dimensionality will unavoidably make them less and less informative with respect to the original problems.
In this paper we suggest a generalization of the PDB
abstractions to what we call “structural patterns”. In
itself, the idea of structural patterns is simple: it corresponds to projecting the original problem onto provably tractable fragments of optimal planning. At least theoretically, moving to structural patterns alleviates the requirement for the projections to be of a low dimensionality. Moreover, we show that the idea is not of theoretical interest only. We introduce a concrete structural patterns abstraction based on decomposing the problem in hand along its causal graph, and show that the induced admissible heuristic can provide more informative estimates than its state-of-the-art alternatives.
Formalism and Notation
Problems of classical planning correspond to reachability analysis in state models with deterministic actions and complete information. In this work we focus on state models corresponding to the SAS+ formalism (Bäckström & Nebel 1995) that captures problems
with multi-valued state variables.
Definition 1 A SAS$^+$ problem instance is given by a quadruple $\Pi = \langle V, A, I, G\rangle$, where:

• $V = \{v_1, \ldots, v_n\}$ is a set of state variables, each associated with a finite domain $dom(v_i)$;

• the initial state $I$ is a complete assignment, and the goal $G$ is a partial assignment to $V$;

• $A = \{a_1, \ldots, a_N\}$ is a finite set of actions, where each action $a$ is a pair $\langle pre(a), eff(a)\rangle$ of partial assignments to $V$ called preconditions and effects, respectively. Each action $a \in A$ is associated with a non-negative real-valued cost $C(a)$.
Figure 1: Logistics example adapted from Helmert (2006). Deliver $p_1$ from $C$ to $G$, and $p_2$ from $F$ to $E$, using the cars $c_1$, $c_2$, $c_3$ and the truck $t$, and making sure that $c_3$ ends up at $F$. The cars may only use city roads (thin edges); the truck may only use the highway (thick edge).
An action $a$ is applicable in a state $s \in dom(V)$ iff $s[v] = pre(a)[v]$ whenever $pre(a)[v]$ is specified. Applying $a$ changes the value of $v$ to $eff(a)[v]$ if $eff(a)[v]$ is specified. In this work we focus on cost-optimal (also known as sequentially optimal) planning, in which the task is to find a plan $\rho \in A^*$ for $\Pi$ minimizing $\sum_{a \in \rho} C(a)$.
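To make the formalism concrete, here is a minimal Python sketch (ours, not the paper's; the class and function names are illustrative) of Definition 1 and the applicability semantics above.

```python
# A minimal sketch of the SAS+ machinery above; all names are our own
# illustrative choices, not an artifact of the paper.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    pre: dict    # partial assignment: variable -> required value
    eff: dict    # partial assignment: variable -> new value
    cost: float = 1.0

def applicable(a, s):
    """a is applicable in s iff s[v] = pre(a)[v] wherever pre(a)[v] is set."""
    return all(s[v] == val for v, val in a.pre.items())

def apply_action(a, s):
    """Applying a changes v to eff(a)[v] wherever eff(a)[v] is specified."""
    assert applicable(a, s)
    t = dict(s)
    t.update(a.eff)
    return t

def plan_cost(plan):
    """The quantity a cost-optimal planner minimizes: the sum of action costs."""
    return sum(a.cost for a in plan)

# Tiny usage example in the spirit of the running Logistics task:
load = Action("load-p1-into-c1-at-C", {"p1": "C", "c1": "C"}, {"p1": "c1"}, 1.0)
s0 = {"p1": "C", "c1": "C", "c2": "B"}
assert applicable(load, s0) and apply_action(load, s0)["p1"] == "c1"
```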
Across the paper we use a slight variation of a Logistics example of Helmert (2006). This example is depicted in Figure 1, and in SAS$^+$ it has
$V = \{p_1, p_2, c_1, c_2, c_3, t\}$
$dom(p_1) = dom(p_2) = \{A, B, C, D, E, F, G, c_1, c_2, c_3, t\}$
$dom(c_1) = dom(c_2) = \{A, B, C, D\}$
$dom(c_3) = \{E, F, G\}$
$dom(t) = \{D, E\}$
$I = \{p_1 \mapsto C,\ p_2 \mapsto F,\ t \mapsto E,\ c_1 \mapsto A,\ c_2 \mapsto B,\ c_3 \mapsto G\}$
$G = \{p_1 \mapsto G,\ p_2 \mapsto E,\ c_3 \mapsto F\}$
$A = \{a_1, \ldots, a_{70}\},$

where the actions correspond to all possible loads and unloads, plus single-segment movements of the vehicles. For instance, if action $a$ captures loading $p_1$ into $c_1$ at $C$, then $pre(a) = \{p_1 \mapsto C,\ c_1 \mapsto C\}$ and $eff(a) = \{p_1 \mapsto c_1\}$.
Definition 2 The causal graph $CG(\Pi) = (V, E)$ of a SAS$^+$ problem $\Pi = \langle V, A, I, G\rangle$ is a digraph over the nodes $V$. An arc $(v, v')$ belongs to $CG(\Pi)$ iff $v \neq v'$ and there exists an action $a \in A$ such that $eff(a)[v']$ is specified, and either $pre(a)[v]$ or $eff(a)[v]$ is specified.

In what follows, for each $v \in V$, by $pred(v)$ and $succ(v)$ we refer to the sets of all immediate predecessors and successors of $v$ in $CG(\Pi)$. Likewise, for any partial assignment $x$ to $V$, and any $V' \subseteq V$, by $x^{[V']}$ we refer to the projection of $x$ onto $V'$.

Definition 3 Let $\Pi = \langle V, A, I, G\rangle$ be a SAS$^+$ problem, and let $v \in V$. The domain transition graph $DTG(v, \Pi)$ of $v$ in $\Pi$ is an arc-labeled digraph over the nodes $dom(v)$ such that an arc $(\vartheta, \vartheta')$ belongs to $DTG(v, \Pi)$ iff there is an action $a \in A$ with $eff(a)[v] = \vartheta'$, and either $pre(a)[v]$ is unspecified or $pre(a)[v] = \vartheta$. In that case, the arc $(\vartheta, \vartheta')$ is labeled with $pre(a)[V \setminus \{v\}]$ and $C(a)$.
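As a concrete companion to Definitions 2 and 3, the following sketch (ours; it assumes the illustrative Action class from the previous snippet) derives the causal graph and the domain transition graphs from an action set.

```python
from collections import defaultdict

def causal_graph(actions):
    """Arcs (v, v') of CG(Pi): v' is affected by some action that also
    mentions v in its precondition or effect."""
    arcs = set()
    for a in actions:
        mentioned = set(a.pre) | set(a.eff)
        for v2 in a.eff:
            for v1 in mentioned:
                if v1 != v2:
                    arcs.add((v1, v2))
    return arcs

def dtg(v, actions):
    """Arc-labeled DTG(v, Pi): maps (theta, theta') to a list of labels,
    where a label is (pre(a) restricted to V \\ {v}, C(a)); theta = None
    stands for an unspecified pre(a)[v], i.e., an arc from every value."""
    arcs = defaultdict(list)
    for a in actions:
        for var, theta_new in a.eff.items():
            if var != v:
                continue
            theta_old = a.pre.get(v)  # None = unspecified
            label = ({u: w for u, w in a.pre.items() if u != v}, a.cost)
            arcs[(theta_old, theta_new)].append(label)
    return dict(arcs)
```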
Figure 2: (a) causal graph; (b) DTGs (labels omitted) of $c_1$ and $c_2$ (left), $t$ (centre), and $c_3$ (right); (c) DTG of $p_1$ and $p_2$.

Figure 2 depicts the causal graph and the (labels omitted) domain transition graphs for our running example problem $\Pi$. Here we note (and later exploit) that $\Pi$ belongs to the class of unary-effect, 1-dependent SAS$^+$ problems (Katz & Domshlak 2007); that is, for all $a \in A$, we have (i) $|eff(a)| = 1$, and (ii) if $eff(a)$ is specified for $V' \subseteq V$, then $|pre(a)^{[V \setminus V']}| \leq 1$.
From PDBs to Structural Patterns?
Given a problem $\Pi = \langle V, A, I, G\rangle$, each subset of variables $V' \subseteq V$ defines a pattern abstraction $\Pi^{[V']} = \langle V', A^{[V']}, I^{[V']}, G^{[V']}\rangle$ by intersecting the initial state, the goal, and all the actions' preconditions and effects with $V'$ (Edelkamp 2001).¹ The idea behind the PDB heuristics is elegantly simple. First, we select a (relatively small) set of subsets $V_1, \ldots, V_m$ of $V$ such that, for $1 \leq i \leq m$,

(a) $\Pi^{[V_i]}$ is an over-approximating abstraction of $\Pi$,

(b) the size of $V_i$ is sufficiently small to perform reachability analysis in $\Pi^{[V_i]}$ by an (either explicit or symbolic) exhaustive search.

Let $h^{[V_i]}(s)$ be the optimal cost of achieving the abstract goal $G^{[V_i]}$ from the abstract state $s^{[V_i]}$. To obtain an admissible heuristic, if the set of abstract problems $\Pi^{[V_1]}, \ldots, \Pi^{[V_m]}$ satisfies certain requirements of disjointness (Felner, Korf, & Hanan 2004; Edelkamp 2001), the PDB heuristic can be set to $h(s) = \sum_{i=1}^{m} h^{[V_i]}(s)$; otherwise, one can set $h(s) = \max_{i=1}^{m} h^{[V_i]}(s)$ (Holte et al. 2006).

¹An additional way to define pattern abstractions has been recently suggested by Hoffmann et al. (2006). However, the difference between the two approaches is not critical for our discussion here.
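For concreteness, the following sketch (ours, under the same illustrative Action representation as above) spells out the PDB recipe just described: project the task onto a small pattern, then solve the abstract problem by exhaustive search.

```python
# A sketch of a pattern database: h^{[V_i]} computed by uniform-cost
# (Dijkstra) search over the abstract space. Assumes Action, applicable,
# and apply_action from the earlier snippet.
import heapq
from itertools import product, count

def project_action(a, pattern):
    return Action(a.name,
                  {v: val for v, val in a.pre.items() if v in pattern},
                  {v: val for v, val in a.eff.items() if v in pattern},
                  a.cost)

def pattern_database(pattern, domains, actions, goal):
    """Dict: abstract state (value tuple) -> optimal abstract goal cost.
    Feasible only because there are prod(|dom(v)|) abstract states, which
    is why patterns must stay small (logarithmic in |V|) for poly time."""
    proj = [project_action(a, pattern) for a in actions]
    proj = [a for a in proj if a.eff]           # drop effect-less projections
    goal_p = {v: val for v, val in goal.items() if v in pattern}
    h, tie = {}, count()
    for values in product(*(domains[v] for v in pattern)):
        s = dict(zip(pattern, values))
        frontier, seen = [(0.0, next(tie), s)], set()
        while frontier:
            g, _, cur = heapq.heappop(frontier)
            key = tuple(cur[v] for v in pattern)
            if key in seen:
                continue
            seen.add(key)
            if all(cur[v] == val for v, val in goal_p.items()):
                h[tuple(values)] = g
                break
            for a in proj:
                if applicable(a, cur):
                    heapq.heappush(frontier, (g + a.cost, next(tie),
                                              apply_action(a, cur)))
        else:
            h[tuple(values)] = float("inf")     # abstract goal unreachable
    return h
```

Summing such tables over suitably disjoint patterns, or maximizing over arbitrary ones, yields the two combination rules mentioned above.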
The Achilles heel of the PDB heuristics is that each pattern (that is, each selected subset of variables $V_i$) is required to be small so that reachability analysis in $\Pi^{[V_i]}$ can be done by exhaustive search. In short, computing $h^{[V_i]}(s)$ in polynomial time requires satisfying $|V_i| = O(\log |V|)$. Note that this constraint implies an inherent scalability limitation of the PDB heuristics: as the problems of interest grow, limiting patterns to logarithmic dimensionality will unavoidably make them less and less informative with respect to the original problems, unless the domain forces its problem instances to consist of small, loosely-coupled subproblems that can be captured well by individual patterns.
However, pattern databases are not necessarily the only way to proceed. In principle, given a SAS$^+$ problem $\Pi = \langle V, A, I, G\rangle$, one can select a (relatively small) set of subsets $V_1, \ldots, V_m$ of $V$ such that, for $1 \leq i \leq m$,

(a) $\Pi^{[V_i]}$ is an over-approximating abstraction of $\Pi$,

(b) the reachability analysis in $\Pi^{[V_i]}$ is tractable (not necessarily due to the size, but) due to the specific structure of $\Pi^{[V_i]}$.

What is important here is that the second requirement can be satisfied even if the size of each selected pattern $V_i$ is $\Theta(|V|)$.

A priori, this generalization of the PDB idea to structural patterns is appealing, as it allows using patterns of unlimited dimensionality. The pitfall, however, is that such structural patterns correspond to tractable fragments of cost-optimal planning, and the palette of such known fragments is extremely limited (Bäckström & Nebel 1995; Bylander 1994; Jonsson & Bäckström 1998; Jonsson 2007). Next, however, we show that this palette can still be extended, and such extensions may allow us to materialize the idea of structural patterns heuristics.

Causal Graph Structural Patterns

We begin by providing a syntactically slight, yet for us semantically important, generalization of the mechanism for constructing disjoint decompositions of planning problems along subsets of their state variables.
Definition 4 Let $\Pi = \langle V, A, I, G\rangle$ be a SAS$^+$ problem, and let $\mathcal{V} = \{V_1, \ldots, V_m\}$ be a set of some subsets of $V$. A disjoint decomposition of $\Pi$ over $\mathcal{V}$ is a set of SAS$^+$ problems $\mathbf{\Pi} = \{\Pi_1, \ldots, \Pi_m\}$, such that

(1) For each $\Pi_i = \langle V_i, A_i, I_i, G_i\rangle$, we have
(a) $I_i = I^{[V_i]}$, $G_i = G^{[V_i]}$, and
(b) if $a^{[V_i]} \stackrel{\text{def}}{=} \langle pre(a)^{[V_i]}, eff(a)^{[V_i]}\rangle$, then $A_i = \{a^{[V_i]} \mid a \in A \wedge eff(a)^{[V_i]} \neq \emptyset\}$.
(2) Each $a \in A$ satisfies
$$C(a) \geq \sum_{i=1}^{m} C_i(a^{[V_i]}). \quad (1)$$
Definition 4 generalizes the idea of "all-or-nothing" action cost partitioning from the literature on disjoint PDBs to arbitrary action cost partitioning. In short, the original cost of each action is distributed one way or another among the "representatives" of that action in the subproblems, with Eq. 1 being the only constraint posed on this cost distribution.
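As a small illustration of the freedom Eq. 1 leaves, here is a sketch (ours) of the uniform scheme that divides $C(a)$ equally among the subproblems in which $a$ retains an effect; the paper's later fork-decomposition example uses exactly this kind of split.

```python
def uniform_cost_partition(a, patterns):
    """One admissible choice under Eq. 1: split C(a) equally among the
    patterns V_i with a nonempty eff(a)^{[V_i]}; all other shares are 0."""
    relevant = [i for i, Vi in enumerate(patterns)
                if any(v in Vi for v in a.eff)]
    costs = [0.0] * len(patterns)
    for i in relevant:
        costs[i] = a.cost / len(relevant)
    return costs

# E.g., a unit-cost load action whose projections survive in five
# subproblems is charged 1/5 in each of them, and 0 elsewhere.
```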
Proposition 1 For any SAS$^+$ problem $\Pi = \langle V, A, I, G\rangle$, any set of $V$'s subsets $\mathcal{V} = \{V_1, \ldots, V_m\}$, and any disjoint decomposition of $\Pi$ over $\mathcal{V}$, we have $h^*(I) \geq \sum_{i=1}^{m} h^*_i(I^{[V_i]})$.
Proof sketch: If $\rho = a_1 \cdot a_2 \cdot \ldots \cdot a_s$ is a cost-optimal plan for $\Pi$, then $h^*(I) = C(\rho) = \sum_{i=1}^{s} C(a_i)$. For $1 \leq i \leq m$, let $\rho^{[V_i]} = a_1^{[V_i]} \cdot a_2^{[V_i]} \cdot \ldots \cdot a_s^{[V_i]}$. From (1) in Definition 4, we have $\rho^{[V_i]}$ being a (not necessarily optimal) plan for $\Pi_i$, and thus $h^*_i(I^{[V_i]}) \leq C_i(\rho^{[V_i]}) = \sum_{j=1}^{s} C_i(a_j^{[V_i]})$. From (2) in Definition 4 we then have $\sum_{i=1}^{m} h^*_i(I^{[V_i]}) \leq \sum_{i=1}^{m} \sum_{j=1}^{s} C_i(a_j^{[V_i]}) = \sum_{j=1}^{s} \sum_{i=1}^{m} C_i(a_j^{[V_i]}) \leq \sum_{j=1}^{s} C(a_j) = h^*(I)$.
Although disjoint decomposition over subsets of variables is rather powerful, it is too general for our purposes because it does not account for any structural requirements one may have for the abstract problems. For instance, focusing on the causal graph, when we project the problem onto subsets of its variables, we leave all the causal-graph connections between the variables in each projected problem untouched. However, as here we aim at obtaining abstract problems with causal graphs of a specific structure, we should somehow project the original problem onto a subgraph (or a set of subgraphs) of the causal graph, respecting such structural requirements. This leads us to introducing what we call causal graph structural patterns.
Definition 5 Let $\Pi = \langle V, A, I, G\rangle$ be a SAS$^+$ problem, and $G = (V_G, E_G)$ be a subgraph of the causal graph $CG(\Pi)$. A causal-graph structural pattern (CGSP) $\Pi_G = \langle V_G, A_G, I_G, G_G\rangle$ is a SAS$^+$ problem defined as follows.

1. $I_G = I^{[V_G]}$, $G_G = G^{[V_G]}$;
2. $A_G = \bigcup_{a \in A} A_G(a)$, where each $A_G(a) = \{a^1, \ldots, a^{l(a)}\}$, $l(a) \leq |eff(a)|$, is a set of actions over $V_G$ such that
(a) for each $a^i \in A_G(a)$, if $eff(a^i)[v']$ is specified, and either $eff(a^i)[v]$ or $pre(a^i)[v]$ is specified, then $(v, v') \in G$;
(b) for each $(v, v') \in G$, and each $a^i \in A_G(a)$, if $eff(a^i)[v']$ is specified, then either $eff(a^i)[v]$ or $pre(a^i)[v]$ is specified as well;
(c) for each $s \in dom(V_G)$, if $pre(a)^{[V_G]} \subseteq s$, then the action sequence $\rho = a^1 \cdot a^2 \cdot \ldots \cdot a^{l(a)}$ is applicable in $s$, and if applying $\rho$ in $s$ results in $s' \in dom(V_G)$, then $s' \setminus s = eff(a)$;
(d) $C(a) \geq \sum_{i=1}^{l(a)} C_G(a^i)$.
Proposition 2 For any SAS$^+$ problem $\Pi = \langle V, A, I, G\rangle$, and any subgraph $G = (V_G, E_G)$ of the causal graph $CG(\Pi)$, at least one CGSP $\Pi_G$ can be efficiently constructed from $\Pi$.

The detailed proof of Proposition 2 is omitted here, but one possible construction of the action sets $A_G(a)$ is as follows: if $\{v_1, \ldots, v_k\}$ is the subset of $V_G$ affected by $a$, then $A_G(a) = \{a^1, \ldots, a^k\}$ with
$$eff(a^i)[v] = \begin{cases} eff(a)[v], & v = v_i\\ \text{unspecified}, & \text{otherwise}\end{cases}$$
$$pre(a^i)[v] = \begin{cases} eff(a)[v], & v = v_j \wedge j < i \wedge (v_j, v_i) \in E_G\\ pre(a)[v], & v = v_j \wedge j > i \wedge (v_j, v_i) \in E_G\\ pre(a)[v], & v = v_i\\ \text{unspecified}, & \text{otherwise}\end{cases}$$
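This construction translates directly into code; the sketch below (ours, reusing the illustrative Action class from earlier) produces one such $A_G(a)$, together with a uniform split of $C(a)$ that satisfies Definition 5(d).

```python
def split_action(a, VG, EG):
    """One possible A_G(a) = {a^1, ..., a^k} per the construction above.
    VG: variables of the subgraph G; EG: set of arcs (u, v) of G."""
    affected = [v for v in a.eff if v in VG]   # v_1, ..., v_k, in a fixed order
    if not affected:
        return []
    share = a.cost / len(affected)             # uniform split satisfies (d)
    parts = []
    for i, vi in enumerate(affected):
        pre_i = {vi: a.pre[vi]} if vi in a.pre else {}
        for j, vj in enumerate(affected):
            if (vj, vi) in EG and j < i:
                pre_i[vj] = a.eff[vj]          # earlier parts already fired
            elif (vj, vi) in EG and j > i and vj in a.pre:
                pre_i[vj] = a.pre[vj]          # later parts have not fired yet
        parts.append(Action(f"{a.name}#{i+1}", pre_i, {vi: a.eff[vi]}, share))
    return parts
```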
Corollary 1 For any SAS$^+$ problem $\Pi = \langle V, A, I, G\rangle$, and any subgraph $G = (V_G, E_G)$ of the causal graph $CG(\Pi)$, we have $CG(\Pi_G) = G$.

Now, given a SAS$^+$ problem $\Pi$ and a subgraph $G$ of $CG(\Pi)$, if the structural pattern $\Pi_G$ can be solved cost-optimally in polynomial time, we can use its solution as an admissible heuristic for $\Pi$. Moreover, given a set $\mathcal{G} = \{G_1, \ldots, G_m\}$ of subgraphs of the causal graph $CG(\Pi)$, these heuristic estimates for the structural patterns $\{\Pi_{G_1}, \ldots, \Pi_{G_m}\}$ are additive if a certain property, given by Definition 6, holds.

Definition 6 Let $\Pi = \langle V, A, I, G\rangle$ be a SAS$^+$ problem, and $\mathcal{G} = \{G_1, \ldots, G_m\}$ be a set of subgraphs of the causal graph $CG(\Pi)$. A disjoint CGSP decomposition of $\Pi$ over $\mathcal{G}$ is a set of CGSPs $\mathbf{\Pi} = \{\Pi_{G_1}, \ldots, \Pi_{G_m}\}$ such that each action $a \in A$ satisfies
$$C(a) \geq \sum_{i=1}^{m} \sum_{a' \in A_{G_i}(a)} C_{G_i}(a'). \quad (2)$$

Proposition 3 For any SAS$^+$ problem $\Pi = \langle V, A, I, G\rangle$, any set of $CG(\Pi)$'s subgraphs $\mathcal{G} = \{G_1, \ldots, G_m\}$, and any disjoint CGSP decomposition of $\Pi$ over $\mathcal{G}$, we have $h^*(I) \geq \sum_{i=1}^{m} h^*_i(I_{G_i})$.
Proof sketch: If $\rho = a_1 \cdot a_2 \cdot \ldots \cdot a_s$ is a cost-optimal plan for $\Pi$, then $h^*(I) = C(\rho) = \sum_{j=1}^{s} C(a_j)$. By Definition 5, for $1 \leq i \leq m$, we have
$$\rho_{G_i} = a_1^1 \cdot \ldots \cdot a_1^{l(a_1)} \cdot \ldots \cdot a_s^1 \cdot \ldots \cdot a_s^{l(a_s)}$$
being a (not necessarily cost-optimal) plan for the CGSP $\Pi_{G_i}$. Given that, we have $h^*_i(I_{G_i}) \leq C_{G_i}(\rho_{G_i}) = \sum_{j=1}^{s} \sum_{a' \in A_{G_i}(a_j)} C_{G_i}(a')$. From Eq. 2, for $1 \leq j \leq s$, we have $\sum_{i=1}^{m} \sum_{a' \in A_{G_i}(a_j)} C_{G_i}(a') \leq C(a_j)$, and thus $\sum_{i=1}^{m} h^*_i(I_{G_i}) \leq \sum_{i=1}^{m} \sum_{j=1}^{s} \sum_{a' \in A_{G_i}(a_j)} C_{G_i}(a') = \sum_{j=1}^{s} \sum_{i=1}^{m} \sum_{a' \in A_{G_i}(a_j)} C_{G_i}(a') \leq \sum_{j=1}^{s} C(a_j) = h^*(I)$.
Relying on Proposition 3, we can now decompose any given problem $\Pi$ into a set of tractable CGSPs $\mathbf{\Pi} = \{\Pi_{G_1}, \ldots, \Pi_{G_m}\}$, solve all these CGSPs in polynomial time, and derive an admissible heuristic for $\Pi$. Note that (similarly to Def. 4) Def. 6 leaves the decision about the actual partition of the action costs rather open. In our discussion henceforth, we consider the (kind of "least-committing") uniform action-cost partitioning, in which the action cost is equally split among its non-redundant projections in $\mathbf{\Pi}$.
Disjoint Fork-Decompositions
We now introduce a certain decomposition of SAS$^+$ planning problems along their causal graphs. In itself, this decomposition does not lead to structural patterns abstractions, yet it provides an important building block on our way towards them.
Definition 7 Let $\Pi = \langle V, A, I, G\rangle$ be a SAS$^+$ problem. The fork-decomposition
$$\mathbf{\Pi} = \{\Pi_{G^f_v}, \Pi_{G^{if}_v}\}_{v \in V}$$
is a disjoint CGSP decomposition of $\Pi$ over the subgraphs $\mathcal{G} = \{G^f_v, G^{if}_v\}_{v \in V}$ where, for $v \in V$,
$$V_{G^f_v} = \{v\} \cup succ(v), \qquad E_{G^f_v} = \bigcup_{u \in succ(v)} \{(v, u)\},$$
$$V_{G^{if}_v} = \{v\} \cup pred(v), \qquad E_{G^{if}_v} = \bigcup_{u \in pred(v)} \{(u, v)\}.$$
In what follows we abbreviate $\Pi_{G^f_v}$ and $\Pi_{G^{if}_v}$ to $\Pi^f_v$ and $\Pi^{if}_v$, respectively.
Figure 3: Causal graphs $CG(\Pi^f_{c_1})$ and $CG(\Pi^{if}_{p_1})$ of a fork and an inverted-fork structural pattern of the running example.
Illustrating Definition 7, let us consider the (uniform) fork-decomposition of the problem $\Pi$ from our running example, assuming all the actions in $\Pi$ have the same unit cost. After eliminating from $\mathcal{G}$ all the singletons², we get $\mathcal{G} = \{G^f_{c_1}, G^f_{c_2}, G^f_{c_3}, G^f_t, G^{if}_{p_1}, G^{if}_{p_2}\}$. Considering the action sets of the problems in $\mathbf{\Pi}$, each original driving action is present (by its projections) in some three problems in $\mathbf{\Pi}$, while each load/unload action is present in some five such problems. For instance, the projections of the action "drive-$c_1$-from-$A$-to-$D$" are present in $\{\Pi^f_{c_1}, \Pi^{if}_{p_1}, \Pi^{if}_{p_2}\}$, and the projections of the action "load-$p_1$-into-$c_1$-at-$A$" are present in $\{\Pi^f_{c_1}, \Pi^f_{c_2}, \Pi^f_{c_3}, \Pi^f_t, \Pi^{if}_{p_1}\}$. Since our fork-decomposition is uniform, the cost of each driving (load/unload) action projection is set to $1/3$ (respectively, to $1/5$).

²If the causal graph $CG(\Pi)$ is connected and $n > 1$, then this elimination is not lossy.
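A sketch (ours) of how the fork and inverted-fork subgraphs of Definition 7 can be read off the causal graph; `arcs` is $CG(\Pi)$ as a set of pairs, e.g., as computed by the causal_graph() sketch above.

```python
def fork_decomposition_subgraphs(variables, arcs):
    """For every v: the fork rooted at v (v plus its successors) and the
    inverted fork sunk at v (v plus its predecessors)."""
    subgraphs = {}
    for v in variables:
        succ = {u for (x, u) in arcs if x == v}
        pred = {u for (u, x) in arcs if x == v}
        subgraphs[("fork", v)] = ({v} | succ, {(v, u) for u in succ})
        subgraphs[("ifork", v)] = ({v} | pred, {(u, v) for u in pred})
    # Singleton subgraphs (empty arc sets) can be dropped; for a connected
    # causal graph with n > 1 this loses nothing (cf. footnote 2).
    return {k: (VG, EG) for k, (VG, EG) in subgraphs.items() if EG}
```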
From Proposition 3 we have that the sum of the costs of solving the problems in $\mathbf{\Pi}$, that is,
$$h^! = \sum_{v \in V}\left(h^*_{\Pi^f_v} + h^*_{\Pi^{if}_v}\right), \quad (3)$$
is an admissible estimate of $h^*$. The question now is how good this estimate is. The optimal cost of solving our problem is 19, and
$$h^! = h^*_{\Pi^f_{c_1}} + h^*_{\Pi^f_{c_2}} + h^*_{\Pi^f_{c_3}} + h^*_{\Pi^f_t} + h^*_{\Pi^{if}_{p_1}} + h^*_{\Pi^{if}_{p_2}} = 15. \quad (4)$$
Taking as a basis for comparison the seminal $h^{max}$ and $h^2$ heuristics (Bonet & Geffner 2001; Haslum & Geffner 2000), we have $h^{max} = 8$ and $h^2 = 13$. Hence, it appears that using the additive CGSP heuristic $h^!$ is at least promising.

Unfortunately, despite the seeming simplicity of the problems in $\mathbf{\Pi}$, it turns out that fork-decompositions by themselves do not fit the requirements of the structural patterns framework. The causal graphs of $\{\Pi^f_{c_1}, \Pi^f_{c_2}, \Pi^f_{c_3}, \Pi^f_t\}$ and $\{\Pi^{if}_{p_1}, \Pi^{if}_{p_2}\}$ form directed forks and inverted forks, respectively (see Figure 3), and, in general, the number of variables in each such problem is $\Theta(n)$. Unfortunately for us, Domshlak and Dinitz (2001) show that even non-optimal planning for SAS$^+$ problems with fork and inverted fork causal graphs is NP-complete. Moreover, even if the domain transition graphs of all the projections are strongly connected, optimal planning for forks and inverted forks remains NP-hard (see Helmert (2003) and (2004) for the respective results). However, in the next section we show that this is not the end of the story on fork-decompositions.
Meeting Structural and Domain Abstractions
While the hardness of optimal planning for problems with fork and inverted fork causal graphs puts a shadow on the relevance of fork-decompositions, a closer look at the proofs of these hardness results of Domshlak and Dinitz (2001) and Helmert (2003; 2004) reveals that these proofs rely, in particular, on the root variables having large domains. It turns out that this dependence is not incidental, and Propositions 4 and 5 below present some significant islands of tractability within these structural fragments of SAS$^+$.
Proposition 4 Given a SAS$^+$ problem $\Pi = \langle V, A, I, G\rangle$ inducing a fork causal graph with a root $r \in V$, if (i) $|dom(r)| = 2$, or (ii) for all $v \in V$ we have $|dom(v)| = O(1)$, then finding a cost-optimal plan for $\Pi$ is poly-time.
Proof sketch: First, if $|dom(r)| = 2$, let $dom(r) = \{0, 1\}$, where $I[r] = 0$. Let $\sigma(r)$ be a 0/1 sequence of length $1 + d$, where $d = \max_{u \in succ(r)} |dom(u)|$, and, for $1 \leq i \leq |\sigma(r)|$,
$$\sigma(r)[i] = \begin{cases} 0, & i \text{ is odd},\\ 1, & i \text{ is even}.\end{cases}$$
Finally, let $*[\sigma(r)]$ be the set of all non-empty prefixes of $\sigma(r)$ if $G[r]$ is unspecified, and the set of all non-empty prefixes of $\sigma(r)$ ending with $G[r]$ otherwise.

(1) For each $u \in succ(r)$, let $DTG^0_u$ and $DTG^1_u$ be the subgraphs of $DTG(u, \Pi)$ obtained by removing from the latter all the arcs labeled with 1 and 0, respectively. For each $u \in succ(r)$, and each $x, y \in dom(u)$, compute the shortest (that is, cost-minimal) paths from $x$ to $y$ in $DTG^0_u$ and $DTG^1_u$.
(2) For each $\sigma \in *[\sigma(r)]$, and each $u \in succ(r)$, build a layered digraph $L_u(\sigma)$ with $|\sigma| + 1$ layers $L_0, \ldots, L_{|\sigma|}$, where $L_0$ consists of only $I[u]$, and for $1 \leq i \leq |\sigma|$, $L_i$ consists of all nodes reachable from the nodes of $L_{i-1}$ in $DTG^0_u$ if $i$ is odd, and in $DTG^1_u$ if $i$ is even. For each $x \in L_{i-1}$, $y \in L_i$, $L_u(\sigma)$ contains an arc $(x, y)$ weighted with the cost of the cost-minimal path from $x$ to $y$ in $DTG^0_u$ if $i$ is odd, and in $DTG^1_u$ if $i$ is even.
(3) For each $\sigma \in *[\sigma(r)]$, let $|\sigma| = s$. A candidate plan $\rho_\sigma$ for $\Pi$ is constructed as follows.
(a) For each $u \in succ(r)$, find a cost-minimal path from $I[u]$ to $G[u]$ in $L_u(\sigma)$. Note that the $i$-th edge on this path (taking us from $x \in L_{i-1}$ to $y \in L_i$) corresponds to the cost-minimal path from $x$ to $y$ in either $DTG^0_u$ or $DTG^1_u$. Let us denote this path from $x$ to $y$ by $S^i_u$.
(b) Set $\rho_\sigma = S^1 \cdot a_{\sigma[2]} \cdot S^2 \cdot \ldots \cdot a_{\sigma[s]} \cdot S^s$, where sequence $S^i$ is obtained by an arbitrary merge of the sequences $\{S^i_u\}_{u \in succ(r)}$, and $a_\alpha$ is the action that changes the value of $r$ to $\alpha$.
(4) Set and return $\rho = \operatorname{argmin}_{\sigma \in *[\sigma(r)]} C(\rho_\sigma)$.

It is not hard to verify that the complexity of the above procedure is polynomial in the description size of $\Pi$, and that the constructed plan $\rho$ is cost-optimal. We omit here the proof of the second case, and only note that a rather simple analysis upper-bounds the time complexity for this case by $\Theta(d^{d^{d+2}+2d+2}) = O(1)$. While for practice this "constant" bound is rather sarcastic, here we have not tried to be complexity-optimal either. Finding more realistic bounds for this concrete problem is definitely of interest.
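To make the case (i) procedure concrete, here is a compact sketch (ours, with illustrative input encodings; leaf goals are assumed to be specified) that returns the optimal plan cost for a fork with a binary root. It mirrors steps (1)-(4), realizing each layered digraph as one value-distance relaxation per layer.

```python
import itertools

INF = float("inf")

def _all_pairs(nodes, arcs, b):
    """Cost-minimal distances within DTG^b_u; arcs: (x, y, root_value, cost)."""
    sp = {x: {y: (0.0 if x == y else INF) for y in nodes} for x in nodes}
    for (x, y, lbl, c) in arcs:
        if lbl == b:
            sp[x][y] = min(sp[x][y], c)
    for k, i, j in itertools.product(nodes, repeat=3):  # Floyd-Warshall
        sp[i][j] = min(sp[i][j], sp[i][k] + sp[k][j])
    return sp

def fork_binary_root_cost(leaf_nodes, leaf_dtgs, leaf_init, leaf_goal,
                          root_cost, root_goal=None):
    """leaf_nodes[u]: list of dom(u); leaf_dtgs[u]: labeled arcs of
    DTG(u, Pi); root_cost[b]: cost of the action setting r to b."""
    leaves = list(leaf_dtgs)
    sp = {u: {b: _all_pairs(leaf_nodes[u], leaf_dtgs[u], b) for b in (0, 1)}
          for u in leaves}
    d = max(len(leaf_nodes[u]) for u in leaves)
    best = INF
    for s in range(1, d + 2):                    # prefixes of sigma(r)
        sigma = [i % 2 for i in range(s)]        # 0, 1, 0, 1, ... per layer
        if root_goal is not None and sigma[-1] != root_goal:
            continue
        total = sum(root_cost[sigma[i]] for i in range(1, s))  # root flips
        for u in leaves:
            dist = {x: (0.0 if x == leaf_init[u] else INF)
                    for x in leaf_nodes[u]}
            for b in sigma:                      # one relaxation per layer
                dist = {y: min(dist[x] + sp[u][b][x][y]
                               for x in leaf_nodes[u])
                        for y in leaf_nodes[u]}
            total += dist[leaf_goal[u]]
        best = min(best, total)
    return best
```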
Proposition 5 Given a 1-dependent SAS$^+$ problem $\Pi = \langle V, A, I, G\rangle$ inducing an inverted fork causal graph with a root $r \in V$, if $|dom(r)| = O(1)$, then finding a cost-optimal plan for $\Pi$ is poly-time.

Proof sketch: Let $|dom(r)| = d$. A naive algorithm that finds a cost-optimal plan for $\Pi$ in time $\Theta(d^{d+1} + |\Pi|^3) = \Theta(|\Pi|^3)$ is as follows.

(1) Create all $\Theta(d^d)$ cycle-free paths from $I[r]$ to $G[r]$ in $DTG(r, \Pi)$.
(2) For each $u \in pred(r)$, and each $x, y \in dom(u)$, compute the cost-minimal path from $x$ to $y$ in $DTG(u, \Pi)$.
(3) For each path in $DTG(r, \Pi)$ generated in step (1), construct a plan for $\Pi$ based on that path for $r$ and the shortest paths computed in (2).
(4) Take the minimal-cost plan from (3).

The time complexity of this algorithm is $\Theta(|\Pi|^3)$, and it finds an optimal plan if such exists. The latter can be shown as follows. For each cost-optimal plan $\rho$, it is easy to verify that $\rho{\downarrow_r}$ is one of the paths generated in step (1). For each $u \in pred(r)$, let $S_u$ denote the sequence of values from $dom(u)$ that is required by the prevail conditions of the actions along $\rho{\downarrow_r}$. If so, for each $u \in pred(r)$, $\rho{\downarrow_u}$ corresponds to a path from $I[u]$ to $G[u]$ in $DTG(u, \Pi)$, traversing the values (= nodes) in $S_u$ in the order required by $S_u$. A plan for $\Pi$ generated in (3) consists of minimal such paths for all $u \in pred(r)$. Therefore, at least one of the plans generated in (3) will be cost-optimal for $\Pi$.
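A sketch (ours, with illustrative encodings; all goals are assumed specified, and sp[u][x][x] = 0 is assumed precomputed) of this enumerate-and-price procedure for the inverted fork:

```python
INF = float("inf")

def _simple_root_paths(arcs, src, dst):
    """All cycle-free src -> dst paths over arcs {(x, y): ...} of DTG(r)."""
    def extend(path):
        if path[-1] == dst:
            yield list(path)
            return
        for (a, b) in arcs:
            if a == path[-1] and b not in path:
                yield from extend(path + [b])
    yield from extend([src])

def inverted_fork_optimal_cost(root, dtg_r, parents, sp, init, goal):
    """dtg_r: {(x, y): (cost, prevail)} with prevail a dict {u: value}
    (at most one entry under 1-dependence); sp[u][x][y]: precomputed
    cost-minimal x -> y distance in DTG(u, Pi)."""
    best = INF
    for path in _simple_root_paths(dtg_r, init[root], goal[root]):
        edges = list(zip(path, path[1:]))
        cost = sum(dtg_r[e][0] for e in edges)
        for u in parents:
            # S_u: the values of u demanded along the chosen root path
            seq = [init[u]] + [dtg_r[e][1][u] for e in edges
                               if u in dtg_r[e][1]] + [goal[u]]
            cost += sum(sp[u][x][y] for x, y in zip(seq, seq[1:]))
        best = min(best, cost)
    return best
```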
Propositions 4 and 5 allow us to bridge between fork-decompositions and tractable structural patterns, at least for 1-dependent planning domains such as Logistics³. The basic idea is to further abstract each CGSP in the fork-decomposition of $\Pi$ by abstracting the domains of its variables to meet the requirements of the tractable fragments.

³If the problem of interest falls outside this problem class, then it should (and could) be first abstracted to a problem within that class. The palette of concrete choices for such an abstraction is rather wide, and currently we investigate their relative pros and cons.

Definition 8 Let $\Pi = \langle V, A, I, G\rangle$ be a SAS$^+$ problem, $v \in V$, and let $\Phi = \{\varphi_1, \ldots, \varphi_k\}$ be a set of mappings from $dom(v)$ to some sets $\Gamma_1, \ldots, \Gamma_k$. A disjoint domain decomposition of $\Pi$ over $\Phi$ is a set of SAS$^+$ problems $\mathbf{\Pi} = \{\Pi_1, \ldots, \Pi_k\}$, such that

(1) For each $\Pi_i = \langle V_i, A_i, I_i, G_i\rangle$, we have⁴
(a) $I_i = \varphi_i(I)$, $G_i = \varphi_i(G)$, and
(b) if $\varphi_i(a) \stackrel{\text{def}}{=} \langle \varphi_i(pre(a)), \varphi_i(eff(a))\rangle$, then $A_i = \{\varphi_i(a) \mid a \in A \wedge \varphi_i(eff(a)) \not\subseteq \varphi_i(pre(a))\}$.
(2) Each $a \in A$ satisfies
$$C(a) \geq \sum_{i=1}^{k} C_i(\varphi_i(a)). \quad (5)$$

⁴For a partial assignment $S$ on $V$, $\varphi_i(S)$ denotes the abstracted partial assignment obtained from $S$ by replacing $S[v]$ (if any) with $\varphi_i(S[v])$.
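A sketch (ours, reusing the illustrative Action class) of Definition 8's domain decomposition, including the non-redundancy test of item (1b):

```python
def abstract_assignment(assignment, v, phi):
    """Apply a domain abstraction phi of dom(v) to a partial assignment."""
    return {u: (phi(val) if u == v else val) for u, val in assignment.items()}

def domain_decomposition(actions, init, goal, v, mappings):
    problems = []
    for phi in mappings:
        acts = []
        for a in actions:
            pre = abstract_assignment(a.pre, v, phi)
            eff = abstract_assignment(a.eff, v, phi)
            # keep phi_i(a) only if phi_i(eff(a)) is not subsumed by phi_i(pre(a))
            if not all(pre.get(u) == val for u, val in eff.items()):
                acts.append(Action(a.name, pre, eff, a.cost))
        problems.append((abstract_assignment(init, v, phi),
                         abstract_assignment(goal, v, phi), acts))
    # Eq. 5 additionally requires the per-problem action costs to sum to
    # at most C(a); e.g., divide a.cost by len(mappings) for a uniform split.
    return problems
```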
Proposition 6 For any SAS$^+$ problem $\Pi = \langle V, A, I, G\rangle$, any $v \in V$, any set of domain abstractions $\Phi = \{\varphi_1, \ldots, \varphi_k\}$, and any disjoint domain decomposition of $\Pi$ over $\Phi$, we have $h^*(I) \geq \sum_{i=1}^{k} h^*_i(\varphi_i(I))$.
Targeting tractability of the causal graph structural patterns, we connect the fork-decompositions with domain decompositions as in Definition 8. Given a fork-decomposition $\mathbf{\Pi} = \{\Pi^f_v, \Pi^{if}_v\}_{v \in V}$ of $\Pi$:

• For each $\Pi^f_v \in \mathbf{\Pi}$,
(a) associate the root $r$ of $CG(\Pi^f_v)$ with mappings $\Phi_v = \{\varphi_{v,1}, \ldots, \varphi_{v,k_v}\}$, $k_v = O(poly(|\Pi|))$, all $\varphi_{v,i} : dom(r) \to \{0, 1\}$;
(b) disjointly decompose $\Pi^f_v$ into $\mathbf{\Pi}^f_v = \{\Pi^f_{v,i}\}_{i=1}^{k_v}$ over $\Phi_v$.

• For each $\Pi^{if}_v \in \mathbf{\Pi}$,
(a) associate the root $r$ of $CG(\Pi^{if}_v)$ with mappings $\Phi'_v = \{\varphi'_{v,1}, \ldots, \varphi'_{v,k'_v}\}$, $k'_v = O(poly(|\Pi|))$, all $\varphi'_{v,i} : dom(r) \to \{0, 1, \ldots, b_{v,i}\}$, $b_{v,i} = O(1)$;
(b) disjointly decompose $\Pi^{if}_v$ into $\mathbf{\Pi}^{if}_v = \{\Pi^{if}_{v,i}\}_{i=1}^{k'_v}$ over $\Phi'_v$.
Figure 4: Binary-range domain abstractions for $\Pi_{p_1}$. The values within and outside each dashed curve are mapped to 0 and 1, respectively.
For 1-dependent problems $\Pi$, from Propositions 3 and 6 we then have
$$h^! = \sum_{v \in V}\left(\sum_{i=1}^{k_v} h^*_{\Pi^f_{v,i}} + \sum_{i=1}^{k'_v} h^*_{\Pi^{if}_{v,i}}\right) \quad (6)$$
being an admissible estimate of $h^*$ for $\Pi$, and from Propositions 4-5 we have that $h^!$ is also computable in polynomial time. The question is, however, how further abstracting our fork-projections affects the informativeness of the heuristic estimate. As we show later, the answer is somewhat surprising.
Let us again use our running example to illustrate the mixture of structural and domain projections outlined above. To begin with an extreme setting of domain abstractions, first, let the domain abstractions for the roots of both forks and inverted forks be to binary domains. Among the multiple options for choosing the mapping sets $\{\Phi_v\}$ and $\{\Phi'_v\}$, here we use a simple choice of distinguishing between different values of each variable $v$ on the basis of their distance from $I[v]$ in $DTG(v, \Pi)$. Specifically, for each $v \in V$, we set $\Phi_v = \Phi'_v$, and, for each value $\vartheta \in dom(v)$,
$$\varphi_{v,i}(\vartheta) = \varphi'_{v,i}(\vartheta) = \begin{cases} 0, & d(I[v], \vartheta) < i\\ 1, & \text{otherwise}\end{cases}$$
For instance, the problem $\Pi^f_{c_1}$ is decomposed (see Figure 2b) into two problems, $\Pi^f_{c_1,1}$ and $\Pi^f_{c_1,2}$, with the 0/1 abstract domains of $c_1$ corresponding to the partitions $\{A\}/\{B, C, D\}$ and $\{A, D\}/\{B, C\}$ of $dom(c_1)$, respectively. The (interesting for certain reasons below) problem $\Pi^{if}_{p_1}$ is decomposed (see Figure 2c) into six problems $\Pi^{if}_{p_1,1}, \ldots, \Pi^{if}_{p_1,6}$ along the abstractions of $dom(p_1)$ depicted in Figure 4.

Now, given the decomposition of $\Pi$ over forks and $\{\Phi_v, \Phi'_v\}_{v \in V}$ as above, consider the problem $\Pi_{p_1,1}$, obtained from projecting $\Pi$ onto the inverted fork of $p_1$ and then abstracting $dom(p_1)$ using
$$\varphi_{p_1,1}(\vartheta) = \begin{cases} 0, & \vartheta \in \{C\}\\ 1, & \vartheta \in \{A, B, D, E, F, G, c_1, c_2, c_3, t\}\end{cases}$$
It is not hard to verify that, from the original actions affecting $p_1$, in $\Pi_{p_1,1}$ we are left only with actions conditioned by $c_1$ and $c_2$. If so, then no information is lost⁵ if we

1. remove from $\Pi_{p_1,1}$ both the variables $c_3$ and $t$, and the actions changing (only) these variables, and
2. redistribute the (fractioned) cost of the removed actions between all other representatives of their originals in $\mathbf{\Pi}$.

⁵One of the reasons why no information is lost is the fact that we keep either the fork or the inverted fork of each variable of $\Pi$. In any event, here we omit further formal justifications of this optimization step.
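A sketch (ours) of these distance-based mappings; we realize $d(I[v], \cdot)$ by BFS over the DTG arcs, so unit arc costs are an assumption of this snippet.

```python
from collections import deque

def dtg_distances(arcs, source):
    """BFS over arcs {(x, y), ...} of DTG(v, Pi) from I[v]."""
    dist, queue = {source: 0}, deque([source])
    while queue:
        x = queue.popleft()
        for (a, b) in arcs:
            if a == x and b not in dist:
                dist[b] = dist[x] + 1
                queue.append(b)
    return dist

def phi_binary(dist, i):
    """phi_{v,i}: 0 iff the value is closer than i to I[v]."""
    return lambda value: 0 if dist.get(value, float("inf")) < i else 1

def phi_ternary(dist, i):
    """phi'_{v,i}: splits dom(v) around distance 2i - 1 (used below)."""
    def phi(value):
        d = dist.get(value, float("inf"))
        return 0 if d < 2 * i - 1 else (1 if d == 2 * i - 1 else 2)
    return phi
```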
The latter revision of the action cost partitioning is obtained directly by replacing the cost-partitioning steps corresponding to Eqs. 2 and 5 with a single, joint action cost partitioning that is applied over the final projections $\bigcup_{v \in V}(\mathbf{\Pi}^f_v \cup \mathbf{\Pi}^{if}_v)$ and satisfies
$$C(a) \geq \sum_{v \in V}\left(\sum_{i=1}^{k_v} \sum_{a' \in A_{G^f_v}(a)} C^f_{v,i}(\varphi_{v,i}(a')) + \sum_{i=1}^{k'_v} \sum_{a' \in A_{G^{if}_v}(a)} C^{if}_{v,i}(\varphi'_{v,i}(a'))\right) \quad (7)$$
Overall, computing $h^!$ as in Eq. 6 under these "all binary-range domain abstractions" provides us with $h^! = 12\frac{7}{15}$, and knowing that the original costs are all integers we can safely adjust it to $h^! = 13$. Hence, even under the most severe domain abstractions as above, $h^!$ on our example problem does not fall below $h^2$.
Let us now slightly relax our domain abstractions for the roots of the inverted forks to be to the ternary range $\{0, 1, 2\}$. While the mappings $\{\Phi_v\}$ stay as before, $\{\Phi'_v\}$ is set to
$$\forall \vartheta \in dom(v): \quad \varphi'_{v,i}(\vartheta) = \begin{cases} 0, & d(I[v], \vartheta) < 2i - 1\\ 1, & d(I[v], \vartheta) = 2i - 1\\ 2, & d(I[v], \vartheta) > 2i - 1\end{cases}$$
Figure 5: Ternary-range domain abstractions for $\Pi_{p_1}$; values that are mapped to the same abstract value are shown as nodes with the same color and borderline. (Three panels: $dom(p_1)$ in $\Pi^{if}_{p_1,1}$, $\Pi^{if}_{p_1,2}$, and $\Pi^{if}_{p_1,3}$.)
For instance, the problem $\Pi^{if}_{p_1}$ is now decomposed into three problems $\Pi^{if}_{p_1,1}, \ldots, \Pi^{if}_{p_1,3}$ along the abstractions of $dom(p_1)$ depicted in Figure 5.
Applying now the same computation of $h^!$ as in Eq. 6 over our new set of domain abstractions gives $h^! = 15\frac{1}{2}$, which, again, can be safely adjusted to $h^! = 16$. Note that this value is higher than the $h^! = 15$ obtained using the fork-decomposition alone (as in Eq. 3). At first view, this outcome may seem counterintuitive, as the domain abstractions are applied over the fork-decomposition. The explanation, however, is that (as shown above) the domain abstractions for the roots of inverted forks may create independence between the roots and their preconditioning variables. And exploiting such domain-abstraction-specific independence relations leads to more targeted action cost partitioning as in Eq. 7.
Discussion

We presented a structural-patterns generalization of PDB abstractions that is based on projecting the problem in hand to provably tractable fragments of optimal planning. The key motivation behind this generalization is to alleviate the requirement for the projections to be of a low dimensionality. To show the practical potential of the basic idea, we introduced and looked into a concrete structural patterns abstraction based on decomposing the problem into a set of fork and inverted-fork components of its causal graph, combined with abstracting the domains of certain variables within these individual components.

The basic principles of the structural patterns framework motivate further research in numerous directions, in particular, (1) discovering new islands of tractability of optimal planning, and (2) translating and/or abstracting general planning problems into such islands. In our ongoing work we aim at pursuing both these directions by "mining" the tractable fragments of optimal SAS$^+$ planning, in particular, of planning with unary-effect actions (Katz & Domshlak 2007), and by performing formal and empirical analysis of alternative schemes for abstracting general SAS$^+$ problems to meet the specification of such islands of tractability.
References

Bäckström, C., and Nebel, B. 1995. Complexity results for SAS+ planning. Comp. Intell. 11(4):625-655.
Bonet, B., and Geffner, H. 2001. Planning as heuristic search. AIJ 129(1-2):5-33.
Bylander, T. 1994. The computational complexity of propositional STRIPS planning. AIJ 69(1-2):165-204.
Culberson, J., and Schaeffer, J. 1998. Pattern databases. Comp. Intell. 14(4):318-334.
Domshlak, C., and Dinitz, Y. 2001. Multi-agent offline coordination: Structure and complexity. In ECP, 277-288.
Edelkamp, S. 2001. Planning with pattern databases. In ECP, 13-34.
Felner, A.; Korf, R. E.; and Hanan, S. 2004. Additive pattern database heuristics. JAIR 22:279-318.
Haslum, P., and Geffner, H. 2000. Admissible heuristics for optimal planning. In ICAPS, 140-149.
Haslum, P.; Bonet, B.; and Geffner, H. 2005. New admissible heuristics for domain-independent planning. In AAAI, 1163-1168.
Helmert, M. 2003. Complexity results for standard benchmark domains in planning. AIJ 146(2):219-262.
Helmert, M. 2004. A planning heuristic based on causal graph analysis. In ICAPS, 161-170.
Helmert, M. 2006. The Fast Downward planning system. JAIR 26:191-246.
Hoffmann, J.; Sabharwal, A.; and Domshlak, C. 2006. Friends or foes? An AI planning perspective on abstraction and search. In ICAPS, 294-303.
Holte, R. C.; Felner, A.; Newton, J.; Meshulam, R.; and Furcy, D. 2006. Maximizing over multiple pattern databases speeds up heuristic search. AIJ 170(16-17):1123-1136.
Jonsson, P., and Bäckström, C. 1998. State-variable planning under structural restrictions: Algorithms and complexity. AIJ 100(1-2):125-176.
Jonsson, A. 2007. The role of macros in tractable planning over causal graphs. In IJCAI, 1936-1941.
Katz, M., and Domshlak, C. 2007. Structural patterns of tractable sequentially-optimal planning. In ICAPS.
Pearl, J. 1984. Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley.