### Estimation of the states of a dynamical system described by linear equations with unknown parameters

Ukrainian Mathematical Journal, Vol. 61, No. 2, 2009
STATE ESTIMATION FOR A DYNAMICAL SYSTEM DESCRIBED
BY A LINEAR EQUATION WITH UNKNOWN PARAMETERS
S. M. Zhuk
UDC 519.962.22
We investigate the state estimation problem for a dynamical system described by a linear operator equation with unknown parameters in a Hilbert space. In the case of quadratic restrictions on
the unknown parameters, we propose formulas for a priori mean-square minimax estimators
and a posteriori linear minimax estimators. A criterion for the finiteness of the minimax error is
formulated. As an example, the main results are applied to a system of linear algebraic–differential equations with constant coefficients.
Introduction
One of the main problems in contemporary applied mathematics is the state estimation problem for a dynamical system described by linear equations with unknown parameters. This problem belongs to the broad class of problems known as inverse problems under conditions of indeterminacy. Mathematically, this class of problems can be described as follows: On the basis of a given element (observations of a state, output measurements, etc.) of a certain functional space, find an estimator for an element l(θ) under the condition that θ satisfies the relation g(θ) = 0. Problems of the determination of l(θ) are informative if the equation g(θ) = 0 has a nonempty set of solutions and y = C(θ) for a certain element θ of this set. Thus, the estimation problem can be formulated in this case as follows: On the basis of a given y = C(θ), θ ∈ Θ, y ∈ Y, find an estimator l̂(θ) for the element l(θ) under the condition that g(θ) = 0 and C(⋅) and l(⋅) are known functions. Note that, in the case where the equation y = C(θ) has a unique solution θ̂, the estimation problem degenerates in the sense that the expression l(θ̂) is the unique estimator for l(θ).
We call an estimation problem linear if Θ and Y are linear spaces and C(⋅) and l(⋅) are linear mappings. One of the classes often considered is the class of linear problems defined by the functions

C(θ) = Hϕ + Dη,   g(θ) = Lϕ + Bf,   θ = (ϕ, f, η) ∈ X × F × Y,
where H, D, L, and B are linear operators. We call a linear estimation problem an estimation problem under conditions of indeterminacy if D ≠ 0, L or B is not equal to zero, and, if B = 0, then N(L) = {ϕ : Lϕ = 0} ≠ {0}. Note that the type of indeterminacy determines the choice of a method for the solution of the estimation problem: if f and η are realizations of random elements, then it is natural to use the stochastic approach. In this case, one needs a priori information on characteristics of the distribution of the random elements. We say that indeterminacy takes place if the distribution of the random elements is partially unknown or some deterministic parameters are partially unknown. For details of various statements of estimation problems under conditions of indeterminacy (jointly known as the theory of guaranteed estimation) for different l, L, H, B, and D and for special spaces, see, e.g., [1]. For the classical results of the theory of guaranteed estimation, see [2–4].
Shevchenko Kyiv National University, Kyiv, Ukraine.
Translated from Ukrains’kyi Matematychnyi Zhurnal, Vol. 61, No. 2, pp. 178–194, February, 2009. Original article submitted May 22,
2008.
In the classical theory, the assumption that the operator of the system has a bounded inverse is essential. The linear estimation problem under conditions of indeterminacy for equations with a noninjective operator in an abstract Hilbert space was studied, in particular, in [5], where estimators were found in the case of quadratic restrictions on the unknown parameters. The method developed there essentially uses the finite dimensionality of the kernel and cokernel of the operator of the system and its normal solvability. Therefore, estimators
can be written, in particular, for boundary-value problems for systems of normal linear ordinary differential
equations. In [6], a criterion was proposed for the solvability of Noetherian boundary-value problems for linear
algebraic–differential equations with varying coefficients (the term “descriptor systems” is also widely used in
the literature) under the condition that algebraic–differential equations can be reduced to the central canonical
form [7, p. 57]. In particular, this condition guarantees the unique solvability of the corresponding Cauchy
problem [7, p. 67]. Combining these results with results of [5], one can construct estimators for solutions of
Noetherian boundary-value problems for linear descriptor equations of special structure with unknown parameters. On the other hand, an example of a linear descriptor equation with constant coefficients for which the homogeneous Cauchy problem has only the trivial solution whereas the operator induced by the Cauchy problem
has the nonclosed set of values was given in [8]. The estimation methods proposed in [1–3, 5] cannot be directly
applied to these systems.
The main result of the present paper is a method for the guaranteed estimation of equations with a linear closed densely defined operator in an abstract Hilbert space. The main advantage of this method is that it does not require the Noetherian property of an operator system or its normal solvability. This method develops the approach proposed in [9, 10] for linear algebraic–differential equations in spaces of square-summable vector functions; it generalizes the results of [1–3] to the case of linear equations with an unbounded operator. For Noetherian equations, the obtained representations of estimators [11] coincide with those described in
[5]. As an example, we apply the proposed method to the state estimation problem for a linear algebraic–differential equation with constant coefficients. Moreover, the reduction to the central canonical form is not required.
We now introduce the necessary notation: c(G, ⋅) = sup {(z, f) : f ∈ G} is the support function of the set G; δ(G, ⋅) is the indicator function of G; dom f = {x ∈ H : f(x) < ∞} is the effective set of a function f; f*(x*) = sup_x {(x*, x) − f(x)} is the Young–Fenchel transformation, or the function conjugate to f; cl f = f** is the closure of the function f (for proper functions, cl f coincides with the lower semicontinuous regularization of f); (fL)(x) = f(Lx) is the image of the function f under the linear operator L; (L*c)(u) = inf {c(G, z) : L*z = u} is the preimage of the function c(G, ⋅) under the operator L*; Arginf_u f(u) is the collection of the points of minimum of the function f; P_{L*} is the operator of orthogonal projection onto the closure cl R(L*); ∂f(x) is the subdifferential of the function f at the point x; and (⋅, ⋅) is the scalar product of a Hilbert space.
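The Young–Fenchel transformation and the support function introduced above can be checked numerically in finite dimensions. A minimal sketch (the quadratic f, the grid bounds, and the test points are illustrative assumptions, not taken from the paper): for f(x) = x²/2 the conjugate is again f*(x*) = x*²/2, and the support function of the unit ball is the norm.

```python
import numpy as np

# Numerical Young-Fenchel transform f*(x*) = sup_x {x* x - f(x)} on a grid,
# for the self-conjugate example f(x) = x^2 / 2 (so f* = f).
xs = np.linspace(-10.0, 10.0, 200001)
f = 0.5 * xs**2

def conjugate(x_star):
    return np.max(x_star * xs - f)

for x_star in (-2.0, 0.0, 1.5):
    assert abs(conjugate(x_star) - 0.5 * x_star**2) < 1e-6

# Support function of the unit ball G = {f : ||f|| <= 1}: c(G, z) = ||z||,
# the supremum being attained at the direction z / ||z||.
z = np.array([3.0, -4.0])
direction = z / np.linalg.norm(z)
assert np.isclose(z @ direction, np.linalg.norm(z))
```

The same grid trick extends to any proper convex f on a compact range; it is only a sanity check, not a substitute for the closed-form conjugates used in the proofs below.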
Statement of the Problem
Assume that an element ϕ satisfies the condition Lϕ ∈ 𝒢 and that a vector y is defined and associated with ϕ by the relation

y = Hϕ + η.   (1)

The operators L and H and the set 𝒢 are assumed to be given, and the element η “simulates” indeterminacy (e.g., it is a random vector). Our aim is to solve the following inverse problem: On the basis of the given y, construct the operation of estimation l̂(ϕ) of the expression l(ϕ) and determine the estimation error σ. Let us formulate this more rigorously.
Let L be a closed operator that maps an everywhere dense subset 𝒟(L) of a Hilbert space H into a Hilbert space F, and let H ∈ 𝓛(H, Y). We represent the condition Lϕ ∈ 𝒢 in the following equivalent form: Assume that ϕ satisfies the linear operator equation

Lϕ = f,   (2)

where the right-hand side f is a certain unknown element of the set 𝒢 ⊂ F. Thus, we know that one of the solutions ϕ of Eq. (2) for a certain f ∈ 𝒢 is defined by the given vector y up to the element η and the operator H: Hϕ = y − η. In what follows, we assume that the element η simulates one of two types of indeterminacy: either it denotes a realization of a random vector with values in Y, mean value zero, and a correlation operator Rη ∈ ℛ, where ℛ is a given subset of 𝓛(Y, Y); or η is a deterministic vector, (f, η) ∈ G, and G is a given subset of F × Y.

Note that the realization of y is determined not only by the specific η, H, and f. In the general case, N(L) = {ϕ ∈ 𝒟(L) : Lϕ = 0} is a nontrivial linear manifold. Therefore, y = H(ϕ₀ + ϕ) + η, where ϕ₀ is an arbitrary element of the unbounded set N(L).
We set l(ϕ) = (l, ϕ). We seek the estimator l̂(ϕ) in the class of affine functionals l̂(ϕ) = (u, y) + c of the observations. We do not assume here that the operators L and H have bounded inverse operators. Therefore, small deviations on the right-hand side of (2) and in the measurements (1) can lead to an infinitely large estimation error. Taking into account this remark and the series of indeterminacies indicated above, we construct an operation of estimation on the basis of the minimax approach. This gives a guaranteed estimation error that characterizes the maximum deviation of the estimator from the actual value and is finite for a fairly broad collection of pairs of operators L and H.
Note that we can consider two statements of the estimation problem: a posteriori statement and a priori
statement. In the case of a priori estimation, in the process of the construction of an operation of estimation we
rely on the “worst” realization y, analyzing all possible correlation operators Rη and the right-hand sides f.
As a result, the optimal estimator is determined only by the direction l and the structure of given sets of restrictions.
Definition 1. The affine functional l̂(ϕ) = (û, ⋅) + ĉ determined from the condition

σ(l, û) = inf_{u, c} σ(l, u),   σ(l, u) := sup_{Lϕ ∈ 𝒢, Rη ∈ ℛ} M(l̂(ϕ) − l(ϕ))²,

is called the a priori mean-square minimax estimator of the expression l(ϕ) = (l, ϕ). The number σ̂(l) = σ^{1/2}(l, û) is called the mean-square minimax error in the direction l.
An a posteriori operation of estimation associates a specific realization of y with the “Chebyshev center” of the set X_y ⊂ H (the so-called a posteriori set) of all possible ϕ each of which is consistent with the “measured” y by virtue of (1) and (2):

(Lϕ, y − Hϕ) ∈ G.

Therefore, we seek this estimator only among the elements of X_y. Note that the inclusion (Lϕ, y − Hϕ) ∈ G implies the inequality ‖y‖ < C, where the constant C is defined by the structure of G. Therefore, in the case
of a posteriori estimation, there is no reason to assume that the undefined element η is a random process because the inequality ‖Rη‖ < c for the norm of the correlation operator does not guarantee that ‖y‖ < C for a specific realization of η. For this reason, we assume that the undefined element η is deterministic.
Definition 2. The set

X_y = {ϕ ∈ 𝒟(L) : (Lϕ, y − Hϕ) ∈ G}

is called the a posteriori set, and the vector ϕ̂ is called the a posteriori minimax estimator for a vector ϕ in a direction l if

d̂(l) := inf_{ϕ ∈ X_y} sup_{ψ ∈ X_y} |(l, ϕ) − (l, ψ)| = sup_{ψ ∈ X_y} |(l, ϕ̂) − (l, ψ)|,

and the expression d̂(l) is called the a posteriori minimax error in the direction l.
Main Results
Below, we describe the general form of an a priori mean-square minimax estimator and formulate a criterion for the finiteness of the error of the mean-square minimax estimator.
Proposition 1. Let 𝒢 and ℛ be convex closed bounded subsets of F and 𝓛(Y, Y), respectively. For a given l ∈ H, the minimax error σ̂(l) is finite if and only if, for a certain u ∈ Y, one has

l − H*u ∈ dom cl(L*c) ∩ (−1) dom cl(L*c).

For these u and l, one has

σ(l, u) = ¼ [cl(L*c)(l − H*u) + cl(L*c)(−l + H*u)]² + sup_{Rη ∈ ℛ} (Rη u, u),   (3)

and, furthermore,

R(L*) ⊂ dom cl(L*c) ⊂ cl R(L*).

If Arginf_u σ(l, u) ≠ ∅, then l̂(ϕ) = (û, y) + ĉ, where

û ∈ Arginf_u σ(l, u),   ĉ = ½ (cl(L*c)(l − H*û) − cl(L*c)(−l + H*û)).
Theorem 1. Suppose that 𝒢 is a convex closed bounded symmetric set whose interior contains 0, and the random element η satisfies the condition

η ∈ {η : M(η, η) ≤ 1}.

Then, for a given l ∈ H, the minimax error σ̂(l) is finite if and only if l − H*u ∈ R(L*) for a certain u ∈ Y. For these l, there exists a unique mean-square minimax estimator û ∈ U_l, which is determined from the condition

σ(l, û) = min_u σ(l, u),   σ(l, u) = (u, u) + min_z {c²(𝒢, z) : L*z = l − H*u}.   (4)

If the sets R(L) and H(N(L)) are closed, then û is determined from the condition

û − Hp₀ ∈ H(∂I₂(H*û)),   Lp₀ = 0,   (5)

I₂(w) = min_z {c²(𝒢, z) : L*z = P_{L*}(l − w)}.
Corollary 1. Suppose that

𝒢 = {f ∈ F : (f, f) ≤ 1},   η ∈ {η : M(η, η) ≤ 1},

and one of the following conditions is satisfied:

(i) the sets R(L) and H(N(L)) are closed;

(ii) the set R(T) = {[Lx, Hx] : x ∈ 𝒟(L)} is closed.

Then, for l ∈ R(L*) + R(H*), and only for these l, the unique minimax estimator û can be represented in the form û = Hp̂, where p̂ is an arbitrary solution of the system

L*ẑ = l − H*Hp̂,   Lp̂ = ẑ.   (6)

The mean-square minimax error has the form σ̂(l) = (l, p̂)^{1/2}.
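In finite dimensions the ranges of all operators are closed, so condition (i) of Corollary 1 holds automatically and system (6) can be solved directly. A minimal numpy sketch (the matrices L and H and the direction l are randomly generated assumptions): substituting ẑ = Lp̂ into the first equation of (6) gives the normal-equation form (LᵀL + HᵀH)p̂ = l.

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((4, 3))   # stand-in for the system operator L : H -> F
H = rng.standard_normal((2, 3))   # stand-in for the observation operator H
l = rng.standard_normal(3)        # direction of estimation

# System (6): L* z_hat = l - H* H p_hat,  L p_hat = z_hat.
# Eliminating z_hat yields (L^T L + H^T H) p_hat = l.
p_hat = np.linalg.solve(L.T @ L + H.T @ H, l)
z_hat = L @ p_hat

u_hat = H @ p_hat                 # minimax estimator u_hat = H p_hat
sigma_hat = np.sqrt(l @ p_hat)    # mean-square minimax error (l, p_hat)^{1/2}

# Check that (z_hat, p_hat) indeed solves system (6).
assert np.allclose(L.T @ z_hat, l - H.T @ (H @ p_hat))
```

Here Lᵀ plays the role of L* upon identifying the finite-dimensional spaces with their duals; (l, p̂) is nonnegative because LᵀL + HᵀH is positive definite.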
Corollary 2. Suppose that linear operators L : H → F and H ∈ 𝓛(H, Y) satisfy condition (i) or (ii) of Corollary 1. Then the system of operator equations (6) has a solution ẑ ∈ 𝒟(L*), p̂ ∈ 𝒟(L) if and only if l = L*z + H*u for certain z ∈ 𝒟(L*) and u ∈ Y.
Corollary 3. Under the conditions of Corollary 1, for an arbitrary l ∈ R(L*) + R(H*) and a realization of y(⋅), the representation (û, y) = (l, ϕ̂) is true, where ϕ̂ is determined from the system

L*q̂ = H*(y − Hϕ̂),   Lϕ̂ = q̂.   (7)
We now consider a posteriori estimators.
Proposition 2. Let G be a convex closed bounded subset of Y × F. The a posteriori minimax error in the direction l is finite if and only if l ∈ dom c(X_y, ⋅) ∩ (−1) dom c(X_y, ⋅) and

R(L*) + R(H*) ⊂ dom c(X_y, ⋅) ∩ (−1) dom c(X_y, ⋅) ⊂ cl (R(L*) + R(H*)).   (8)

For these l, the estimator and the error are as follows:

(l, ϕ̂) = ½ (c(X_y, l) − c(X_y, −l)),   d̂(l) = ½ (c(X_y, l) + c(X_y, −l)).
Theorem 2. Suppose that

G = {(f, η) : ‖f‖² + ‖η‖² ≤ 1}

and the operators L and H satisfy condition (i) or (ii) of Corollary 1. Then, for l ∈ R(L*) + R(H*), and only for these l, an a posteriori minimax estimator ϕ̂ for a vector ϕ in the direction l exists and is determined from system (7). The a posteriori error is determined by the relation

d̂(l) = (1 − (y, y − Hϕ̂))^{1/2} σ̂(l).   (9)
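Combining system (7) with formula (9) gives a complete a posteriori recipe in finite dimensions. The sketch below (the matrices and the scaling of f and η are assumptions chosen so that ‖f‖² + ‖η‖² ≤ 1 holds) also checks the identity (û, y) = (l, ϕ̂) of Corollary 3:

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((4, 3))
H = rng.standard_normal((2, 3))
l = rng.standard_normal(3)

# Generate a y consistent with the constraint ||f||^2 + ||eta||^2 <= 1.
phi_true = rng.standard_normal(3)
phi_true /= 10.0 * (np.linalg.norm(L @ phi_true) + 1.0)  # keeps f = L phi small
eta = np.array([0.1, -0.05])
y = H @ phi_true + eta

M = L.T @ L + H.T @ H
phi_hat = np.linalg.solve(M, H.T @ y)   # system (7) after eliminating q_hat
p_hat = np.linalg.solve(M, l)           # system (6), the a priori part
sigma_hat = np.sqrt(l @ p_hat)          # a priori error, Corollary 1

shrink = 1.0 - y @ (y - H @ phi_hat)    # the factor from formula (9)
d_hat = np.sqrt(shrink) * sigma_hat     # a posteriori error, formula (9)

# Corollary 3 identity: (u_hat, y) = (l, phi_hat) with u_hat = H p_hat.
assert np.isclose((H @ p_hat) @ y, l @ phi_hat)
```

The quantity (y, y − Hϕ̂) equals min over X_y of ‖Lϕ‖² + ‖y − Hϕ‖², so it never exceeds 1 for observations generated inside the constraint set, and the a posteriori error never exceeds the a priori one.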
Corollary 4. Suppose that, under the conditions of Theorem 2, for an arbitrary direction l one has l̂(ϕ) = (l, ϕ̂), where ϕ̂ is determined from (7). Then the vector ϕ̂ is a minimax estimator for the vector ϕ in the sense that

inf_{ϕ ∈ X_y} sup_{x ∈ X_y} ‖ϕ − x‖ = sup_{x ∈ X_y} ‖ϕ̂ − x‖ = (1 − (y, y − Hϕ̂))^{1/2} max_{‖l‖ = 1} σ̂(l).
Let us illustrate the application of Corollary 4. Without loss of generality (see the lemma on singular decomposition in [8]), we can assume that the matrices F and C are determined by a collection of blocks of consistent dimensions, i.e.,

F = [ E 0; 0 0 ],   C = [ C1 C2; C3 C4 ].
Proposition 3. Suppose that t ↦ x(t) ∈ Rⁿ is determined as a solution of the equation

(d/dt) Fx(t) − Cx(t) = f(t),   Fx(t₀) = 0,

and the set G has the form

G = {(f, η) : ∫_{t₀}^{T} (‖f(t)‖² + ‖η(t)‖²) dt ≤ 1}.

Then the a posteriori minimax estimator for the function x(⋅) based on the observations y(t) = x(t) + η(t), t₀ ≤ t ≤ T, is defined by the expression x̂(⋅), where x̂(t) = [x1(t), x2(t)] and the functions x1(⋅) and x2(⋅) are determined from the equations

ẋ1(t) = (C1 − C2(E + C4′C4)⁻¹C4′C3) x1(t) + (C2(E + C4′C4)⁻¹C2′ + E) q1(t) + C2(E + C4′C4)⁻¹ y2(t),   x1(t₀) = 0,

q̇1(t) = (−C1′ + C3′C4(E + C4′C4)⁻¹C2′) q1(t) + C3′C4(E + C4′C4)⁻¹ y2(t) − y1(t) + (C3′(E − C4(E + C4′C4)⁻¹C4′) C3 + E) x1(t),   q1(T) = 0,   (10)

x2(t) = −(E + C4′C4)⁻¹C4′C3 x1(t) + (E + C4′C4)⁻¹(C2′q1(t) + y2(t)),

q2(t) = −(E − C4(E + C4′C4)⁻¹C4′) C3 x1(t) − C4(E + C4′C4)⁻¹(C2′q1(t) + y2(t)).
The minimax error has the form

sup_{X_y} ‖x − x̂‖ = (1 − ∫_{t₀}^{T} (y, y − x̂) dt)^{1/2} max_{‖l‖=1} (∫_{t₀}^{T} (l, p) dt)^{1/2},

where the function p(⋅) is determined from (10) if one sets y(t) = l(t). The proposition remains true for a nonstationary matrix C(t).
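After time discretization, Proposition 3 reduces to the abstract system (7) with L a finite-difference model of the operator (d/dt)F − C (with the initial condition Fx(t₀) = 0 built into its domain) and H = I. The sketch below solves the discretized normal equations instead of integrating the boundary-value problem (10); the grid size, the matrix C, the horizon [0, 1], the synthetic observations, and the omission of quadrature weights are all simplifying assumptions:

```python
import numpy as np

N = 100                                    # number of time steps (assumption)
h = 1.0 / N
C = np.array([[0.5, 1.0],
              [1.0, -0.3]])                # plays the role of the blocks C1..C4
F = np.array([[1.0, 0.0],
              [0.0, 0.0]])                 # singular "descriptor" matrix
n = 2 * (N + 1)                            # unknowns: x(t_0), ..., x(t_N)

# Discrete L: (L x)_k = F (x_{k+1} - x_k) / h - C x_k on each subinterval.
L = np.zeros((2 * N, n))
for k in range(N):
    L[2*k:2*k + 2, 2*k:2*k + 2] = -F / h - C
    L[2*k:2*k + 2, 2*k + 2:2*k + 4] = F / h

keep = np.arange(1, n)                     # drop x1(t_0): enforces F x(t_0) = 0
L = L[:, keep]
Hmat = np.eye(n)[:, keep]                  # observations y(t) = x(t) + eta(t)

rng = np.random.default_rng(1)
y = 0.1 * rng.standard_normal(n)           # synthetic observations (assumption)

A = L.T @ L + Hmat.T @ Hmat                # abstract recipe (7)
phi_hat = np.linalg.solve(A, Hmat.T @ y)
x_hat = Hmat @ phi_hat                     # grid values of the estimate
```

A production version would carry the quadrature weight h in both inner products and use a sharper difference scheme; the point here is only that the estimator is computable without any reduction to the central canonical form.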
Auxiliary Results and Proof
We introduce the sets

U_l = {u ∈ Y : L*z = l − H*u for a certain z ∈ 𝒟(L*)},   D = {l ∈ H : U_l ≠ ∅},

where, upon the identification of the Hilbert spaces H and F with their duals, the operator L* acts from F into H. The unique existence of the adjoint operator L* is guaranteed by the fact that L is densely defined [12, p. 40]. Recall that the indicator function δ(G, ⋅) of the set G is defined as follows: δ(G, f) = 0 for f ∈ G, and δ(G, f) = +∞ for f ∉ G.
The lemma below plays the key role in the proof of the theorem on the existence, uniqueness, and representation of minimax estimators.
Lemma 1. Let G be a convex bounded closed subset of F and let L be a linear densely defined closed operator from H into F. Then

(L*c)* = (δL),   (L*c)** = (δL)*,   R(L*) ⊂ dom (δL)* ⊂ cl R(L*).

If the interior of G has common points with R(L), then the functional (L*c) is proper, L*c = (L*c)**, dom (δL)* = dom (L*c) = R(L*), and

(L*c)(x) = c(G, z₀) = inf {c(G, z) : L*z = x},   x ∈ R(L*).

The lemma remains true if the indicator function of a convex set is replaced by a proper convex function [9].
Remark 1. The condition int G ∩ R(L) ≠ ∅ of Lemma 1 is essential because there exist an operator L and a set G such that R(L) ≠ cl R(L), int G = ∅, dom (L*c) = R(L*), dom (δL)* = cl R(L*), and (L*c)(x) > (δL)*(x) for x ∈ cl R(L*) \ R(L*). Indeed, we set

F = [ 1 0; 0 0 ],   C(t) ≡ [ 1 −1; 1 0 ]

and define an operator x ↦ Lx ∈ (L₂[0, 1])² by the method described in the proof of Proposition 3. The equation Lx = 0 is equivalent to the system of algebraic–differential equations

ẋ1(t) − x1(t) + x2(t) = 0,   x1(0) = 0,   −x1(t) = 0,

which implies that x1(t) = x2(t) = 0 on [0, 1]. Therefore, N(L) = {0}, whence cl R(L*) = (L₂[0, 1])². On the other hand, for the solvability of the algebraic–differential equation

−ż1(t) − z1(t) − z2(t) = f1(t),   z1(1) = 0,   z1(t) = f2(t),

it is necessary that f2(⋅) be absolutely continuous. Therefore, R(L*) and R(L) are not closed. We set

G = { f = (f1, f2) : ∫₀¹ f1²(s) ds ≤ 1, f2 = 0 }.

Then int G = ∅ in (L₂[0, 1])². Since Lp ∈ G ⇔ p1 = 0, ‖p2‖ ≤ 1, we have

(δL)*(x) = sup_p {(x, p) − δ(G, Lp)} = sup {(p2, x2) : ∫₀¹ p2²(s) ds ≤ 1} = ‖x2‖.
Thus,

dom (δL)* = cl R(L*) = (L₂[0, 1])².

On the other hand,

c(G, z) = c(PS₁(0), z) = c(S₁(0), P*z) = ‖z1‖,   S₁(0) = { f ∈ L₂²[0, 1] : ‖f‖ ≤ 1 },

where P denotes the operator of multiplication by the matrix [ 1 0; 0 0 ] in the space L₂²[0, 1]. By virtue of the injectiveness of L*, we get

(L*c)(x) = inf {c(G, z) : L*z = x} = ‖x2‖,   x2 ∈ W₂¹[0, 1].

If

xₙ = (x1, x2,ₙ) → x = (x1, x*),   x* ∉ W₂¹[0, 1],

then

(L*c)(xₙ) → ‖x*‖ = (δL)*(x),

but (L*c)(x) = +∞.
Proof of Lemma 1. Let p ∈ 𝒟(L). Then the linear functional z ↦ p(z) = (p, L*z) is bounded. Therefore, it can be extended to the entire space F by continuity. Hence,

(L*c)*(p) = sup_{x ∈ R(L*)} {(p, x) − inf {c(G, z) : L*z = x}}

= sup_{x ∈ R(L*)} sup_{z ∈ L*⁻¹(x)} {(p, x) − c(G, z)} = sup_{z ∈ 𝒟(L*)} {(p, L*z) − c(G, z)}

= sup_{z ∈ F} {(Lp, z) − c(G, z)} = c*(G, ⋅)(Lp) = δ(G, Lp).

Consider the case where p ∉ 𝒟(L). By the definition of the adjoint operator [12, p. 39], the linear functional z ↦ p(z) = (p, L*z) is unbounded. This means that there exists a sequence {zₙ} such that ‖zₙ‖ ≤ 1, zₙ ∈ 𝒟(L*), and p(zₙ) → +∞. On the other hand, the support function c(G, ⋅) of a
bounded convex set is bounded in the neighborhood of an arbitrary point z ∈ F and, hence, is continuous [13, p. 21]. Then supₙ c(G, zₙ) = M < +∞ and

(L*c)*(p) = sup_{z ∈ 𝒟(L*)} {(p, L*z) − c(G, z)} ≥ supₙ {p(zₙ) − M} = +∞.

On the other hand, by definition, we have (δL)(p) = +∞. Thus, we have shown that (L*c)*(p) = (δL)(p) for all p, whence (L*c)** = (δL)*.
Let x ∉ N(L)^⊥ and let Lp ∈ G for a certain p ∈ 𝒟(L). There exists p₀ ∈ N(L) such that n(p₀, x) > 0, n ∈ N. In this case, we have

(δL)*(x) = sup_{q ∈ 𝒟(L)} {(q, x) − δ(G, Lq)} ≥ supₙ {n(p₀, x)} = +∞.

Therefore,

dom (δL)* ⊂ N(L)^⊥ = cl R(L*).

On the other hand, if x = L*z, then

(δL)*(x) = sup_{q ∈ 𝒟(L)} {(Lq, z) − δ(G, Lq)} ≤ sup_{f ∈ F} {(f, z) − δ(G, f)} = c(G, z) < +∞

because G is bounded. Therefore,

R(L*) ⊂ dom (δL)* ⊂ cl R(L*).
Now assume that int G ∩ R(L) ≠ ∅. Let us show that this is sufficient for the validity of the inequality (L*c) ≤ (δL)*. Indeed, for x* ∈ dom (δL)* and x ∈ 𝒟(L), the Young–Fenchel inequality [14] yields

(x*, x) − (δL)*(x*) ≤ δL(x) = δ(G, Lx).

We fix x* ∈ dom (δL)* and introduce the set

M(x*) = {(z, μ) : Lx = z, μ = (x*, x) − (δL)*(x*), x ∈ 𝒟(L)}.

Note that

W := int epi(δ(G, ⋅)) = int G × {μ ∈ R¹ : μ > 0} and W ∩ M(x*) = ∅.
Indeed, if (z, μ) ∈ W ∩ M(x*), then

δ(G, Lx) < μ = (x*, x) − (δL)*(x*),   Lx = z,

which contradicts the Young–Fenchel inequality. Thus, the convex sets epi(δ(G, ⋅)) and M(x*) can be separated by a nonzero linear continuous functional (z₀, β₀):

sup {(z₀, z) + β₀α : (z, α) ∈ W} ≤ inf {(z₀, z) + β₀α : (z, α) ∈ M(x*)}.   (11)

It is easy to verify that β₀ < 0. Indeed, if β₀ > 0, then the supremum in (11) is equal to +∞. On the other hand, the supremum in (11) is never equal to −∞, which guarantees that the infimum in (11) is finite. If β₀ = 0, then, according to (11), G and R(L) are separated by the functional (z₀, ⋅), but then int G ∩ R(L) = ∅.
By the definition of M(x*), we have

−∞ < c(G, z₀) = sup_z {(z₀, z) − β₀ δ(G, z)} ≤ inf_x {(z₀, Lx) − β₀(x*, x) + β₀(δL)*(x*)},

whence

−∞ < inf_x {(z₀, Lx) − β₀(x*, x)}   ⇒   [−β₀x*, z₀] ⊥ {[x, Lx] : x ∈ 𝒟(L)}.

Taking into account the form of the orthogonal complement of the graph of L [12, p. 40], we obtain

z₀ ∈ 𝒟(L*),   L*z₀ = β₀x*   ⇒   (L*c)(x*) ≤ c(G, β₀⁻¹z₀) ≤ (δL)*(x*).

We have shown that, on dom (δL)*, one has (L*c) = (δL)* and dom (δL)* ⊂ R(L*). By definition, R(L*) ⊂ dom (L*c). We have proved earlier that R(L*) ⊂ dom (δL)*. Generally speaking, we have (L*c) ≥ (L*c)** = (δL)*. Therefore, dom (δL)* ⊂ dom (L*c). Thus,

(L*c) = (δL)*,   dom (δL)* = dom (L*c) = R(L*).

According to the Fenchel–Moreau theorem, (L*c) = (L*c)** = (δL)* if and only if (L*c) has a closed epigraph, which, for proper convex functionals, is equivalent to lower semicontinuity [14, p. 178].

The lemma is proved.
Proof of Proposition 1. Taking the equality Mξ² = M(ξ − Mξ)² + (Mξ)² and relation (1) into account, we get

M((l, ϕ) − (u, y) − c)² = [(l − H*u, ϕ) − c]² + M(u, η)².
Hence,

sup_{ϕ ∈ L⁻¹(𝒢), Rη ∈ ℛ} M((l, ϕ) − (u, y) − c)² = sup_{ϕ ∈ L⁻¹(𝒢)} [(l − H*u, ϕ) − c]² + sup_{Rη ∈ ℛ} (Rη u, u).
We transform the first term as follows:

sup_{ϕ ∈ L⁻¹(𝒢)} |(l − H*u, ϕ) − c| = ½ ((δL)*(l − H*u) + (δL)*(−l + H*u)) + |c − ½ ((δL)*(l − H*u) − (δL)*(−l + H*u))|.   (12)

Using relation (12), for given l, u, and c we get

sup_{ϕ ∈ L⁻¹(𝒢)} [(l − H*u, ϕ) − c]² < +∞   ⇔   l − H*u ∈ dom (δL)* ∩ (−1) dom (δL)*.

The set dom (δL)* is a convex cone with vertex at zero. Therefore, dom (δL)* ∩ (−1) dom (δL)* is the maximum linear manifold contained in dom (δL)*. Setting

c = ½ ((δL)*(l − H*u) − (δL)*(−l + H*u))

and using relation (12) and Lemma 1, we obtain the expression for σ(l, u).
The expression sup_{Rη ∈ ℛ} (Rη u, u) is finite for an arbitrary u. Indeed,

(Rη u, u) ≤ ‖Rη‖ ‖u‖² ≤ c ‖u‖²,   Rη ∈ ℛ,

for a certain constant c because the set ℛ is bounded. Therefore, σ(l, u) < +∞. To complete the proof, it remains to use the definition of the mean-square minimax estimator.
Proof of Theorem 1. According to Proposition 1, for a given l ∈ H the minimax error is finite if and only if

l − H*u ∈ dom (δL)* ∩ (−1) dom (δL)*.

Since 0 ∈ int 𝒢 ∩ R(L), the conditions of Lemma 1 are satisfied. Therefore, dom (δL)* = R(L*) and

I₁(u) := cl(L*c)(l − H*u) = (L*c)(l − H*u).
Using Proposition 1, we establish the statement of the theorem concerning the finiteness of the minimax error.
We get

(Rη u, u) = M(η, u)² ≤ M(η, η)(u, u)   ⇒   sup_{Rη ∈ ℛ} (Rη u, u) = (u, u).
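The step sup over the correlation operators can be checked numerically: since M(η, η) = tr Rη, the admissible correlation operators are the positive-semidefinite matrices of trace at most 1, and the supremum of (Rη u, u) is attained at the rank-one operator uuᵀ/‖u‖². A small sketch (the dimension and the random matrices are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(5)

# The extremal correlation operator: rank-one, trace 1, aligned with u.
R_best = np.outer(u, u) / (u @ u)
assert np.isclose(np.trace(R_best), 1.0)
assert np.isclose(u @ R_best @ u, u @ u)      # the quadratic form attains (u, u)

# Any other admissible R (PSD, trace <= 1) stays below (u, u):
A = rng.standard_normal((5, 5))
R = A @ A.T
R /= np.trace(R)
assert u @ R @ u <= u @ u + 1e-12
```

The inequality holds because (Ru, u) ≤ λ_max(R)‖u‖² ≤ tr(R)‖u‖² for any positive-semidefinite R.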
Using (3), we obtain

σ(l, u) = I₁(u) + (u, u),

and, hence, relation (4) is true. Note that U_l = {u : l − H*u ∈ R(L*)} by virtue of the definition of U_l. The functional I₁ is convex and weakly lower semicontinuous, which follows from Lemma 1. Hence, u ↦ σ(l, u) is weakly lower semicontinuous, strictly convex, and coercive. Since I₁(u) = +∞ in the complement of U_l, for an arbitrary minimizing sequence {uₙ} we have uₙ ∈ U_l. This sequence is bounded by virtue of the coercivity of u ↦ σ(l, u). We separate a weakly convergent subsequence of {uₙ}. By virtue of weak lower semicontinuity, the greatest lower bound of u ↦ σ(l, u) is attained at the weak limit of this subsequence. Thus, the set of points of minimum is nonempty, and, by virtue of strict convexity, it consists of a single point û. Since I₁(u) = +∞ for u ∉ U_l, we have l − H*û ∈ R(L*). Thus, we have proved the existence and uniqueness of a minimax estimator.
Assume that the condition of the second part of the theorem is satisfied. Then

U_l = {u : P_{N(L)} H*u = P_{N(L)} l},

where P_{N(L)} denotes the orthoprojector onto N(L). Consider the functional

I₂(w) = min_z {c²(𝒢, z) : L*z = P_{L*}(l − w)}.
According to Lemma 1, the minimum in I₂(w) is attained at a certain ẑ(w) for every point w. Then, by virtue of properties of a support function, we have

I₂^{1/2}(w) = c(𝒢, ẑ(w)) ≤ c(𝒢, z(w)) + c(𝒢, z₀),

where L*z₀ = 0 and L*z(w) = P_{L*}(l − w), z(w) ∈ R(L). Since the left-hand side of this inequality does not depend on z₀, we get

I₂^{1/2}(w) ≤ c(𝒢, z(w)) + min_{z₀ ∈ N(L*)} c(𝒢, z₀) = c(𝒢, z(w))

because c(𝒢, ⋅) ≥ 0 and c(𝒢, 0) = 0. For an arbitrary w, the boundedness of I₂(⋅) in a certain neighborhood V(w) now follows from the fact that z(w) depends continuously on w (L is normally solvable) and from the properties of the set 𝒢. Thus, I₂(⋅) is a continuous function. According to the theorem on the subdifferential of the image of a convex function under a linear operator [14, p. 212], we get

∂I₃(û) = H ∂I₂(H*û),   I₃(u) = I₂(H*u).
On the other hand, on U_l we have

P_{L*}(l − H*u) = l − H*u   ⇒   I₁(u) = I₂(H*u) = I₃(u).

Therefore, the point of minimum û of the functional σ(l, ⋅) is simultaneously a solution of the problem of conditional optimization

I₄(u) = (u, u) + I₃(u) → min,   u ∈ U_l.

Since the affine manifold U_l is parallel to the linear subspace U₀ = {u : P_{N(L)} H*u = 0}, a necessary and sufficient condition for an extremum of I₄ on U_l has the form [14, p. 89]

∂I₄(û) ∩ (U₀)^⊥ ≠ ∅.

According to the Moreau–Rockafellar theorem, we have ∂I₄(û) = ∂I₃(û) + {2û}. On the other hand, by virtue of the conditions of the theorem, we get

(U₀)^⊥ = N^⊥(P_{N(L)} H*) = R((P_{N(L)} H*)*) = H(N(L)).

Thus, there exists p₀ with Lp₀ = 0 such that

û − Hp₀ ∈ H ∂I₂(H*û),   I₂(w) = min_z {c²(𝒢, z) : L*z = P_{L*}(l − w)},   û ∈ U_l.
The theorem is proved.
Proof of Corollary 1. Note that the sets 𝒢 and {η : M(η, η) ≤ 1} satisfy the conditions of Theorem 1. Therefore, there exists a unique mean-square minimax estimator û ∈ U_l. Assume that condition (i) is satisfied. Then, according to Theorem 1, we get

û − Hp₀ ∈ H(∂I₂(H*û)),   û ∈ U_l,   I₂(w) = min_z {c²(𝒢, z) : L*z = P_{L*}(l − w)}.
Let us determine the subdifferential of I₂. We introduce additional notation. Let L̃₁* denote the linear operator defined on R(L*) according to the rule

L̃₁* w = z,   z ∈ R(L) ∩ 𝒟(L*),   L*z = w.
Let us show that L̃₁* is a closed operator. Indeed, let

wₙ → w,   wₙ ∈ R(L*),   L̃₁* wₙ = zₙ → z.

Then w ∈ R(L*) and L*zₙ = wₙ → w, zₙ → z. Since zₙ ∈ R(L) by the definition of L̃₁*, we have z ∈ R(L). Taking into account that L* is closed, we establish that z ∈ R(L) ∩ 𝒟(L*) and L*z = w, i.e., w ∈ 𝒟(L̃₁*) and L̃₁* w = z.
By virtue of the closed-graph theorem [12], L̃₁* is bounded. We extend L̃₁* to the entire space H by setting

L₁* w = L̃₁*(I − P_{N(L)}) w,   w ∈ H.
We associate with L₁* an operator L₁ constructed from L by analogy and establish that (L₁)* = L₁*. Indeed, for arbitrary p ∈ F and w ∈ H, we have

(L₁* w, p) − (L₁ p, w) = (z, p) − (q, w) = (z, Lq) − (q, L*z) = 0,

where z ∈ R(L) ∩ 𝒟(L*), L*z = w, q ∈ R(L*) ∩ 𝒟(L), and Lq = p.
Note that c(𝒢, z) = ‖z‖. Therefore, for l − w ∈ R(L*), by the definition of L₁* we get

I₂^{1/2}(w) = ‖L₁*(l − w)‖ = min_z {c(𝒢, z) : L*z = P_{L*}(l − w)}.
Setting

k(q) = ‖L₁* l − q‖²,

we obtain

I₂(w) = ‖L₁*(l − w)‖² = k(L₁* w) = (k L₁*)(w).
Note that q ↦ k(q) is a convex continuous functional on the entire space F. Therefore, it satisfies the conditions of the theorem on the subdifferential of the image of a convex functional under a linear continuous operator [14, p. 212], whence

∂(k L₁*)(w) = L₁ ∂k(L₁* w) = L₁ ∂‖L₁*(l − w)‖².
We set w = H*û. Since û ∈ U_l, we have L*ẑ = l − H*û, where ẑ = L₁*(l − H*û). Thus,

∂I₂(H*û) = ∂(k L₁*)(H*û) = L₁ ∂‖ẑ‖² = 2‖ẑ‖ L₁(𝒢₁(ẑ)),

where 𝒢₁(z) = {f ∈ 𝒢 : (f, z) = ‖z‖}. If ẑ = 0, then

0 = L₁*(l − H*û) = L̃₁*(l − H*û).
By virtue of the injectivity of L̃₁*, we get l = H*û. Condition (5) takes the form Hp₀ = û, Lp₀ = 0, and, hence, 0 = l − H*Hp₀ and Lp₀ = 0. Thus, û is expressed in terms of solutions of (6).
Let ẑ ≠ 0. By the definition of the operator L₁, we have

H L₁(𝒢₁(ẑ)) = { Hp : Lp = ẑ/‖ẑ‖, p ∈ R(L*) }.

Thus, condition (5) takes the form

û − Hp₀ = 2‖ẑ‖ Hp,   L*ẑ = l − H*û,   Lp = ẑ/‖ẑ‖,   ẑ ∈ R(L),   p ∈ R(L*),   (13)

for a certain p₀ ∈ N(L). We set p̃ = 2‖ẑ‖ p. Using (13), we get û = H(p̃ + p₀), where

L(p̃ + p₀) = ẑ,   Lp₀ = 0,   L*ẑ = l − H*H(p̃ + p₀).

If we now set p̂ = p̃ + p₀, then p̂ and ẑ satisfy (6). Consequently, û = Hp̂.
We now show that p̂ can be taken to be an arbitrary solution of (6). Indeed, we introduce the linear operator Tx = [Lx, Hx] from H into the Cartesian product F × Y. It is clear that N(T) = N(L) ∩ N(H) and T*(u, z) = L*z + H*u. Let (p₀, z₀) be determined from the conditions

Lp₀ = z₀,   L*z₀ + H*Hp₀ = 0.   (14)

We set u₀ = Hp₀. Then T*(u₀, z₀) = 0 and Tp₀ = [z₀, u₀]. Therefore, Tp₀ ∈ N(T*). However,

R(T) ∩ N(T*) = {0},

whence p₀ ∈ N(T) = N(L) ∩ N(H), i.e., u₀ = 0. It remains to note that any two solutions (p₁, z₁) and (p₂, z₂) of the linear system (6) differ by a solution of (14). Therefore, according to the result proved above, we have H(p₁ − p₂) = 0.
Now assume that condition (ii) is satisfied. Then the unique solution [z*, u*] of the optimization problem

‖[z, u]‖² → inf,   T*[z, u] = l,   (15)
230
S. M. Z HUK
is orthogonal to the null manifold T ∗, and, hence, it belongs to the range of values of the operator T, i.e., we
simultaneously have
T ∗[u∗, z∗ ] = l.
[u∗, z∗ ] = Tx ,
By the definition of T, this yields
Lx = z∗ ,
L∗z∗ + H ∗u∗ = l.
Hx = u∗ ,
This, in turn, yields
u∗ ∈Ul
⇒
σ(uˆ, l ) ≤ σ(u∗, l ) .
On the other hand, $l = L^{*}\hat z + H^{*}\hat u$ and, by virtue of relation (6), $Lp = \hat z$ for a certain $p \in \mathcal{D}(L)$. Thus, $T^{*}[\hat u, \hat z] = l$. Therefore, by virtue of (15), we get

$$\sigma(l, \hat u) = \| [\hat u, \hat z] \|^2 \ge \| [u_*, z_*] \|^2 .$$

On the other hand, according to (15), we have

$$\sigma(u_*, l) = (u_*, u_*) + \min_z \{ \|z\|^2 : L^{*}z = l - H^{*}u_* \} \le (u_*, u_*) + (z_*, z_*) \le \sigma(\hat u, l).$$

Therefore, $\sigma(l, \hat u) = \sigma(l, u_*)$, which, by virtue of strict convexity, yields $u_* = \hat u$.

Taking (6) into account, we conclude that $\sigma(l, \hat u) = (\hat z, \hat z) + (\hat u, \hat u) = (l, \hat p)$, whence $\hat\sigma(l) = (l, \hat p)^{1/2}$.
Corollary 1 is proved.
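The closing identity of the proof can be checked term by term from (6) (with $\hat u = H\hat p$, $L\hat p = \hat z$, and $L^{*}\hat z = l - H^{*}H\hat p$):

```latex
(l, \hat p)
  = (L^{*}\hat z + H^{*}H\hat p,\ \hat p)
  = (\hat z,\ L\hat p) + (H\hat p,\ H\hat p)
  = (\hat z, \hat z) + (\hat u, \hat u)
  = \sigma(l, \hat u).
```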
Proof of Corollary 2. If the system of operator equations (6) has a solution $\hat z \in \mathcal{D}(L^{*})$, $\hat p \in \mathcal{D}(L)$, then $l = L^{*}\hat z + H^{*}\hat u$ holds with $\hat u = H\hat p$.

Now assume that the conditions of the corollary are satisfied and $l \in R(L^{*}) + R(H^{*})$. Then the operators L and H and the vector l satisfy the conditions of Corollary 1. Therefore, the minimax estimator $\hat u$ can be represented in the form $\hat u = H\hat p$, where $\hat p$ is determined as a solution of (6).

Corollary 2 is proved.
Proof of Corollary 3. First of all, note that system (7) has a nonempty set of solutions $(q, \varphi)$. This follows from Corollary 2 and the statement according to which, for an arbitrary $y \in Y$, the vector $H^{*}y$ belongs to the set $R(L^{*}) + R(H^{*})$. Now let $\hat u = H\hat p$, where $\hat p$ is determined as a solution of (6) and $\hat\varphi$ is determined as a solution of (7). By direct calculation, one can easily establish that $(\hat u, y) = (l, \hat\varphi)$.

Corollary 3 is proved.
Proof of Proposition 2. We write

$$- c(X_y, -l) \le (l, \psi) \le c(X_y, l), \qquad \psi \in X_y,$$

whence

$$\Big| (l, \psi) - \frac{1}{2}\big( c(X_y, l) - c(X_y, -l) \big) \Big| \le \frac{1}{2}\big( c(X_y, l) + c(X_y, -l) \big), \qquad \psi \in X_y.$$
Therefore,
$$\sup_{\psi \in X_y} \big| (l, \varphi) - (l, \psi) \big| = \frac{1}{2}\big( c(X_y, l) + c(X_y, -l) \big) + \Big| (l, \varphi) - \frac{1}{2}\big( c(X_y, l) - c(X_y, -l) \big) \Big|. \tag{16}$$
Expression (16) is meaningful only if $l \in \operatorname{dom} c(X_y, \cdot) \cap (-1)\operatorname{dom} c(X_y, \cdot)$. We show that

$$R(L^{*}) + R(H^{*}) \subset \operatorname{dom} c(X_y, \cdot) \subset \overline{R(L^{*}) + R(H^{*})}.$$

The first inclusion is a corollary of the statement that, for an arbitrary $l = L^{*}z + H^{*}u$, one has

$$c(X_y, l) = \sup_{x \in X_y} \{ (Lx, z) - (u, y - Hx) \} + (u, y) \le c(G, [z, u]) + (u, y) < +\infty$$

by virtue of the boundedness of G.

On the other hand, we have

$$c(X_y, l) \ge \sup \{ (l, x) : Lx = 0,\ Hx = 0 \} = +\infty$$

for every $l \notin \overline{R(L^{*}) + R(H^{*})}$. Thus, expression (16) is meaningful only if condition (8) is satisfied. In what follows, we assume that this condition is satisfied. It follows from (16) that
$$\sup_{\psi \in X_y} \big| (l, \varphi) - (l, \psi) \big| \ge \frac{1}{2}\big( c(X_y, l) + c(X_y, -l) \big)$$

for any $\varphi \in X_y$, and the equality is realized for

$$(l, \hat\varphi) = \frac{1}{2}\big( c(X_y, l) - c(X_y, -l) \big), \qquad \hat\varphi \in X_y,$$

by virtue of the convexity of G and the continuity of the scalar product.

Proposition 2 is proved.
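Behind (16) is an elementary scalar fact: if a number $t$ is compared with all points $s$ of an interval $[a, b]$, then

```latex
\sup_{s \in [a,b]} |t - s|
  \;=\; \frac{b-a}{2} + \Bigl| t - \frac{a+b}{2} \Bigr| ,
```

applied here with $a = -c(X_y, -l)$, $b = c(X_y, l)$, $s = (l, \psi)$, and $t = (l, \varphi)$. The infimum over $t$ is attained at the midpoint of the interval, which is exactly the choice of $(l, \hat\varphi)$ above.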
Proof of Theorem 2. Assume that the operators L and H satisfy the conditions of the theorem. Then the projection problem

$$I(x) = (Lx, Lx) + (y - Hx, y - Hx) \to \min_{x \in \mathcal{D}(L)} \tag{17}$$

has a solution $\hat\varphi$. Indeed [15, p. 23], for an arbitrary $y \in Y$, the set of solutions of (17) is, at the same time, the collection of solutions of the variational equality

$$-(L\varphi, Lx) + (y - H\varphi, Hx) = 0, \qquad x \in \mathcal{D}(L), \tag{18}$$

which contains, in particular, the solution $\hat\varphi$ of the consistent system (see Corollary 2)

$$L^{*}\hat q = H^{*}(y - H\hat\varphi), \qquad L\hat\varphi = \hat q.$$

We set

$$X_0 = \{ x : I_1(x) + I(\hat\varphi) \le 1 \}, \qquad I_1(x) = (Lx, Lx) + (Hx, Hx).$$

Note that

$$I(\hat\varphi - x) = I_1(x) + I(\hat\varphi) - 2(L\hat\varphi, Lx) + 2(y - H\hat\varphi, Hx) = I_1(x) + I(\hat\varphi)$$

for $x \in \mathcal{D}(L)$ by virtue of equality (18).

Let $x \in X_0$. Then $I(\hat\varphi - x) = I_1(x) + I(\hat\varphi) \le 1$ and, hence,

$$\hat\varphi + (-1)X_0 = \hat\varphi + X_0 \subset X_y.$$
Conversely, if $x \in X_y$, then $\tilde x := \hat\varphi - x \in \mathcal{D}(L)$ and

$$1 \ge I(x) = I(\hat\varphi - \tilde x) = I_1(\hat\varphi - x) + I(\hat\varphi).$$

Therefore,

$$-x + \hat\varphi \in X_0, \quad x \in X_y \quad \Rightarrow \quad X_y \subset \hat\varphi + X_0.$$

Thus,

$$c(X_y, l) = (l, \hat\varphi) + c(X_0, l).$$
Note that

$$c(X_0, l) = \sup_x \big\{ (l, x) - \delta(S_\beta^0, Tx) \big\} = \inf \big\{ c(S_\beta^0, [z, u]) : L^{*}z + H^{*}u = l \big\}, \tag{19}$$

where $Tx = [Lx, Hx]$ and $\delta(S_\beta^0, \cdot)$ is the indicator function of the ball

$$S_\beta^0 = \{ [p, q] : s(p, q) \le \beta \}, \qquad s(p, q) = (p, p) + (q, q), \qquad \beta = 1 - I(\hat\varphi) \ge 0.$$
Indeed, by definition, we have

$$x \in X_0 \quad \Leftrightarrow \quad s(Tx) \le \beta \quad \Leftrightarrow \quad \delta(S_\beta^0, Tx) \le 0.$$

Thus, the convex functional $x \mapsto \delta_T(x) = \delta(S_\beta^0, Tx)$ is the indicator function of $X_0$. Since the linear operator T and the set $S_\beta^0$ satisfy the conditions of Lemma 1, we get

$$c(X_0, \cdot) = (\delta_T)^{*}(\cdot) = \inf \big\{ c(S_\beta^0, [z, u]) : T^{*}[z, u] = \cdot \big\},$$

where $c(S_\beta^0, w) = \beta^{1/2}(w, w)^{1/2}$ by virtue of the Schwarz inequality. Thus, according to (19), we have
$$c(X_y, l) = (l, \hat\varphi) + \beta^{1/2} \Big[ \inf \big\{ \|z\|^2 + \|u\|^2 : L^{*}z + H^{*}u = l \big\} \Big]^{1/2}$$

for an arbitrary $l \in H$. However, $\inf\{\cdot\}$ on the right-hand side of the last equality is nothing but the a priori minimax error [see the arguments presented in the solution of the optimization problem (15)]. Therefore,

$$c(X_y, l) = (l, \hat\varphi) + \beta^{1/2} \hat\sigma(l).$$

To complete the proof, it remains to note that [see relation (9)]

$$\beta = 1 - I(\hat\varphi) = 1 - (y, y - H\hat\varphi)$$

and to use Proposition 2.
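Relation (9), used in the last step, says that the optimal value of the projection problem (17) equals $(y, y - H\hat\varphi)$. This is easy to confirm numerically in a finite-dimensional model, where the variational equality (18) becomes the normal equation $(L'L + H'H)\hat\varphi = H'y$ (all matrices below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up finite-dimensional model: matrices L, H and a data vector y.
L = rng.standard_normal((4, 6))
H = rng.standard_normal((5, 6))
y = rng.standard_normal(5)

# Variational equality (18) in matrix form: (L'L + H'H) phi = H'y.
phi = np.linalg.solve(L.T @ L + H.T @ H, H.T @ y)

# Optimal value of the projection problem (17):
r, e = L @ phi, y - H @ phi
value = r @ r + e @ e

# Relation (9): the optimal value equals (y, y - H phi).
assert np.isclose(value, y @ (y - H @ phi))
```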
Proof of Corollary 4. According to Theorem 2, we get
inf sup ϕ − x
ϕ ∈X y x ∈X
y
=
inf sup sup (l, ϕˆ − x )
ϕ ∈X y x ∈X l =1
y
≥ sup inf sup ϕ − x
l =1 ϕ ∈X y x ∈X y
= sup dˆ (l ) = (1 − ( y, y − Hϕˆ ))1/2 max σˆ (l ).
l =1
l =1
It is clear that

$$\sup_{x \in X_y} \|\hat\varphi - x\| \ge \inf_{\varphi \in X_y} \sup_{x \in X_y} \|\varphi - x\|.$$

By virtue of the conditions of the corollary, this yields

$$\sup_{x \in X_y} \|\hat\varphi - x\| = \sup_{\|l\|=1} \sup_{x \in X_y} (l, \hat\varphi - x) = \sup_{\|l\|=1} \hat d(l) \ge \inf_{\varphi \in X_y} \sup_{x \in X_y} \|\varphi - x\|.$$

Corollary 4 is proved.
Proof of Proposition 3. Let L denote the operator generated by the linear descriptor equation with the matrices F and C. In this case, the operator H acts as follows: $Hx = x$. Then the operators L and H and the set G satisfy condition (ii). According to Theorem 2, the a posteriori minimax estimator $\hat x$ for the solution x of the descriptor equation in the direction l exists for any $l \in R(L^{*}) + R(H^{*}) = L_2^n(t_0, T)$ and is determined from the operator equations

$$L^{*}\hat q = H^{*}(y - H\hat x), \qquad L\hat x = \hat q. \tag{20}$$

The a posteriori error is determined by the relation

$$\hat d(l) = \big(1 - (y, y - H\hat x)\big)^{1/2} (l, \hat p)^{1/2}.$$
Note that Eqs. (20) are equivalent to the system of algebraic–differential equations

$$\dot x_1(t) = C_1 x_1(t) + C_2 x_2(t) + q_1(t), \qquad x_1(t_0) = 0,$$
$$0 = C_3 x_1(t) + C_4 x_2(t) + q_2(t),$$
$$\dot q_1(t) = - C_1' q_1(t) - C_3' q_2(t) + x_1(t) - y_1(t), \qquad q_1(T) = 0, \tag{21}$$
$$0 = - C_2' q_1(t) - C_4' q_2(t) + x_2(t) - y_2(t).$$

Indeed, taking into account the block structure of the matrices F and C, we can write $x = (x_1, x_2)$ and $z = (z_1, z_2)$. Then

$$F x(t) = \begin{pmatrix} x_1(t) \\ 0 \end{pmatrix}, \qquad F' z(t) = \begin{pmatrix} z_1(t) \\ 0 \end{pmatrix}, \qquad C x(t) = \begin{pmatrix} C_1 x_1(t) + C_2 x_2(t) \\ C_3 x_1(t) + C_4 x_2(t) \end{pmatrix}.$$
Taking into account the form of L and L∗ , we establish the equivalence of (20) and (21).
We rewrite the algebraic equations of (21) in the form

$$\begin{pmatrix} C_4 & E \\ E & -C_4' \end{pmatrix} \begin{pmatrix} x_2(t) \\ q_2(t) \end{pmatrix} = \begin{pmatrix} -C_3 x_1(t) \\ C_2' q_1(t) + y_2(t) \end{pmatrix}.$$

Multiplying this equality from the left by

$$\begin{pmatrix} (E + C_4' C_4)^{-1} C_4' & (E + C_4' C_4)^{-1} \\ E - C_4 (E + C_4' C_4)^{-1} C_4' & - C_4 (E + C_4' C_4)^{-1} \end{pmatrix},$$

we obtain the representations of $x_2$ and $q_2$ given in the proposition. The expressions for $x_1$ and $q_1$ are established by substituting the obtained relations into (21).
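The matrix used for the left multiplication is indeed a left inverse of the block matrix of the algebraic subsystem, so $x_2(t)$ and $q_2(t)$ are determined uniquely. A quick numerical check with a made-up $C_4$ (here $C_4'$ is the transpose):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 3
C4 = rng.standard_normal((m, m))   # made-up C4 block
E = np.eye(m)

# Block matrix of the algebraic equations of (21):
A = np.block([[C4, E],
              [E, -C4.T]])

S = np.linalg.inv(E + C4.T @ C4)   # (E + C4'C4)^{-1}, always invertible
M = np.block([[S @ C4.T,            S],
              [E - C4 @ S @ C4.T, -C4 @ S]])

# M A = E, so left multiplication by M solves for x2(t) and q2(t).
assert np.allclose(M @ A, np.eye(2 * m))
```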
REFERENCES
1. O. H. Nakonechnyi, “Estimation of parameters under conditions of indeterminacy,” Nauk. Zap. Kyiv. Nats. Univ., 7, 102–111 (2004).
2. N. N. Krasovskii, Theory of Control of Motion [in Russian], Nauka, Moscow (1968).
3. A. B. Kurzhanskii, Control and Observation under Conditions of Indeterminacy [in Russian], Nauka, Moscow (1977).
4. O. H. Nakonechnyi, Optimal Control and Estimation in Partial Differential Equations. A Handbook [in Ukrainian], Kyiv University, Kyiv (2004).
5. Yu. Podlipenko, “Minimax estimation of the right-hand sides of Noetherian equations in a Hilbert space under conditions of indeterminacy,” Dopov. Nats. Akad. Nauk Ukr., No. 12, 36–44 (2005).
6. O. A. Boichuk and L. M. Shehda, “Degenerate Noetherian boundary-value problems,” Nelin. Kolyvannya, 10, No. 3, 303–312
(2007).
7. A. M. Samoilenko, M. I. Shkil’, and V. P. Yakovets’, Linear Systems of Differential Equations with Degeneration [in Ukrainian], Vyshcha Shkola, Kyiv (2000).
8. S. M. Zhuk, “Closedness and normal solvability of an operator generated by a degenerate linear differential equation with variable coefficients,” Nelin. Kolyvannya, 10, No. 4, 464–480 (2007).
9. S. M. Zhuk, “Minimax problems of observation for linear descriptor differential equations,” Zh. Prikl. Mat., 2, 39–46 (2005).
10. S. M. Zhuk, Problems of Minimax Observation for Linear Descriptor Systems [in Ukrainian], Author’s Abstract of the Candidate-Degree Thesis (Physics and Mathematics), Kyiv (2006).
11. S. M. Zhuk, S. Demidenko, and O. H. Nakonechnyi, “On the problem of minimax estimation of solutions of one-dimensional
boundary-value problems,” Tavr. Visn. Inform. Mat., 1, 7–24 (2007).
12. V. É. Lyantse and O. G. Storozh, Methods of the Theory of Unbounded Operators [in Russian], Naukova Dumka, Kiev (1983).
13. I. Ekeland and R. Temam, Convex Analysis and Variational Problems [Russian translation], Mir, Moscow (1979).
14. A. D. Ioffe and V. M. Tikhomirov, Theory of Extremal Problems [in Russian], Nauka, Moscow (1974).
15. A. V. Balakrishnan, Applied Functional Analysis [Russian translation], Nauka, Moscow (1980).
```