Regularization and iterative approximation for
linear ill-posed problems in the space of
functions of bounded variation
V. V. Vasin†
Abstract
For stable approximation of a nonsmooth (discontinuous) solution of a linear operator equation of the first kind, a two-stage regularizing algorithm is proposed. On the first stage, Tikhonov's regularization is carried out, in which the total variation together with the norm of L_p(D), D ⊂ R^m, is used as the stabilizing functional. This allows us to establish the strong convergence of the regularized solutions in L_p(D) and the convergence of their approximations without any restrictions on the dimension m. On the second stage, a subgradient method with iterations in a smoother space W_2^1(D) is applied for solving the regularized equation. Moreover, a theorem on convergence of discrete approximations for the regularized problem is formulated and proved.
1 Introduction
In stable reconstruction of nonsmooth (in particular, discontinuous) solutions of ill-posed problems considered in the form of an operator equation of the first kind

Au = f,  (1.1)

the fundamental question is the choice of the stabilizing functional in variational methods of regularization. The requirements to the stabilizer are in a certain sense conflicting. On the one hand, this functional should have a rather strong stabilizing effect to guarantee the convergence in an appropriate topology; on the other hand, it should not smooth out the solution too much, so as not to lose its fine structure.
In the case where the solution space is a space of functions of one variable, in many papers stabilizers of the form Ω(u) = β‖u‖ + μ V[u] were used for this purpose in the Tikhonov method; here β, μ > 0, V[u] is the variation of a function u on a segment, and ‖u‖ is a norm, as a rule, the L_p-norm (p ≥ 1). In this way one can obtain the piecewise-uniform convergence of the regularized solutions to the normal solution of equation (1.1) (see, e.g., [3] and the references therein).

Institute of Mathematics and Mechanics, Ural Branch of Russian Acad. Sci., Ekaterinburg, Russia; vasin@imm.uran.ru
† Supported by the RFBR grant No. 00-01-00325.
In the series of papers [11, 12] the results of piecewise-uniform approximation were generalized to the two-dimensional and multidimensional cases. In these works, the variation on an m-dimensional parallelepiped D was taken as the second term of the stabilizer Ω(u), this variation being defined by analogy with functions of one variable, and ‖u‖ = ‖u‖_{L_1}. The case m ≤ 3 was studied in [1] with the use of the stabilizer

Ω(u) = ‖u‖_{L_2}^2 + J_ε(u),

where

J_ε(u) = sup { ∫_D [ u(x) div v(x) + ε (1 − |v(x)|²)^{1/2} ] dx : v ∈ C_0^1(D; R^m), |v(x)| ≤ 1 }.

In the case ε = 0, the functional J_ε(u) can be written in the form

J(u) = sup { ∫_D u(x) div v(x) dx : v ∈ C_0^1(D; R^m), |v(x)| ≤ 1 }.  (1.2)

The functional J(u) is called the total variation of a function u (see [7]). The strong convergence of the Tikhonov-regularized solutions in the space L_p, 1 ≤ p < m/(m − 1), and weak convergence in L_p for p = m/(m − 1) were proved.
In this work, for an arbitrary dimension m and an arbitrary domain D with a piecewise-smooth boundary, we use the stabilizing functional

Ω(u) = ‖u‖_{L_p(D)}^p + J(u)  (p > 1)  (1.3)

and prove the strong convergence of the regularized solutions in the space L_p(D) for every p > 1 together with convergence with respect to the functional J(u). In addition, the proof of stability (convergence) of finite-dimensional approximations for the Tikhonov regularizer is given. Finally, we establish the convergence theorem in L_p(D), p ≥ 2, for the whole class of iterative methods of subgradient type for the problem of minimization of the regularizing functional with the nonsmooth (nondifferentiable) stabilizer (1.3). We note that the problem of justifying the algorithms of nonsmooth optimization in the regularized problem was not considered in the above-mentioned works.

The stabilizer (1.3) with p = 2 was considered earlier in [5]. However, in that paper, just as in [1], the strong convergence of the regularized solutions was established only for p < m/(m − 1). In the general situation, the strong convergence in L_2 and the error estimate are proved under additional constraints on the smoothness of the solution.

In the sequel we set β = μ = 1 for simplicity and, as a rule, we omit the subscript L_p(D) in the notation of the norm.
2 Convergence of regularized solutions
Let A be a linear bounded operator acting from L_p(D) into L_q(S), where 1 < p, q < ∞, D ⊂ R^m, and S ⊂ R^k. We do not assume that the operator A is invertible or that the operator inverse to A is continuous; consequently, equation (1.1) belongs to the class of essentially ill-posed problems. Let equation (1.1) be solvable in the space U = { u ∈ L_p(D) : J(u) < ∞ } for the exact data A and f, which are given by their approximations A_h and f_δ with the approximation conditions

‖f − f_δ‖ ≤ δ,  ‖A − A_h‖ ≤ h.  (2.1)

Note that the space U equipped with the norm ‖u‖_U = ‖u‖_{L_p} + J(u) is a Banach space. This fact is established analogously to the case p = 1 (see [7, Remark 1.12]).

Consider the Tikhonov regularization method of the form

min { ‖A_h u − f_δ‖^q + α Ω(u − u_0) : u ∈ U },  (2.2)

where the stabilizing functional Ω(u) is defined by formula (1.3) with J(u) defined by (1.2).
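As a concrete illustration of the functional minimized in (2.2), the following sketch evaluates a fully discretized one-dimensional analog: the data-fit term plus the stabilizer (1.3) with the total variation replaced by a sum of absolute jumps. The identity "blur" matrix, the step data, and all parameter values are illustrative assumptions, not taken from the paper.

```python
# Hypothetical 1-D discretization of problem (2.2): evaluate
#   Phi(u) = ||A_h u - f_delta||_q^q + alpha * ( ||u - u0||_p^p + J(u - u0) ),
# with J approximated by the discrete total variation sum_i |u_i - u_{i-1}|.

def tv(u):
    """Discrete total variation: sum of absolute jumps between neighbors."""
    return sum(abs(u[i] - u[i - 1]) for i in range(1, len(u)))

def objective(u, A, f, u0, alpha=1.0, p=2, q=2):
    """Tikhonov objective of (2.2) on a grid (all weights illustrative)."""
    Au = [sum(A[i][j] * u[j] for j in range(len(u))) for i in range(len(A))]
    residual = sum(abs(Au[i] - f[i]) ** q for i in range(len(f)))
    du = [u[i] - u0[i] for i in range(len(u))]
    return residual + alpha * (sum(abs(x) ** p for x in du) + tv(du))

n = 4
A = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # identity "blur"
f = [0.0, 0.0, 1.0, 1.0]          # noiseless data: a step
u0 = [0.0] * n

step = [0.0, 0.0, 1.0, 1.0]       # exact step solution: residual 0, TV = 1
print(objective(step, A, f, u0))  # prints 3.0 = 0 + 1*(2 + 1)
```

Note how the TV term charges the step only for its single jump, which is why this stabilizer does not smear discontinuities the way a derivative-norm stabilizer would.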
Theorem 2.1. Let A and A_h be linear bounded operators. Then for every pair A_h, f_δ satisfying condition (2.1) and for every α > 0 and u_0 ∈ U the problem (2.2) has a unique solution u_α^{δ,h}, and if the relationship between the parameters is such that α(δ, h) → 0 and (δ + h)^q / α(δ, h) → 0, then we have the convergence

lim_{δ,h→0} ‖u_α^{δ,h} − û‖_{L_p} = 0,  lim_{δ,h→0} J(u_α^{δ,h} − u_0) = J(û − u_0),

where û is the normal solution of equation (1.1) with respect to the strongly convex functional Ω(u − u_0).
Proof. Solvability. We shall denote by Φ(u) the objective functional in the problem (2.2) and by Φ* its infimum. Let u_m be a minimizing sequence, i.e., Φ(u_m) → Φ* as m → ∞. Therefore, {u_m} is bounded with respect to the L_p-norm and hence is weakly compact, i.e., u_{m_k} ⇀ ū ∈ L_p.

Taking into account the weak semicontinuity of the operator A_h and the weak lower semicontinuity of the functional J(u) in L_p (see [7, Theorem 1.9]), we have the chain of inequalities

Φ(ū) ≤ lim inf_{k→∞} Φ(u_{m_k}) ≤ lim sup_{k→∞} Φ(u_{m_k}) = Φ*,

which implies that ū realizes the minimum in the problem (2.2).
Since the functionals

N(u) = ‖A_h u − f_δ‖^q and J(u − u_0)

are convex (see [1, Theorem 2.4]) and ‖u − u_0‖^p is strictly convex as the p-th (p > 1) power of a strictly convex norm, the extremal element u_α is unique.

Convergence. Rename u_α^{δ,h} by u_α. Let û be the normal solution of equation (1.1) with respect to the functional Ω(u − u_0) = ‖u − u_0‖^p + J(u − u_0), i.e.,

Ω(û − u_0) = min { Ω(u − u_0) : Au = f }.

Since the solution set of equation (1.1) is convex and closed and the functional Ω(u − u_0) is a strictly convex weakly lower semicontinuous functional, it follows that there exists a unique normal solution.

From the evident inequality

Φ(u_α) = ‖A_h u_α − f_δ‖^q + α( ‖u_α − u_0‖^p + J(u_α − u_0) ) ≤ Φ(û) ≤ (h‖û‖ + δ)^q + α( ‖û − u_0‖^p + J(û − u_0) )

we have the estimate

‖u_α − u_0‖^p ≤ (h‖û‖ + δ)^q / α + ‖û − u_0‖^p + J(û − u_0).  (2.3)
If the relationship between the parameters α(δ, h) is such as in the assumption of the theorem, then the above estimate implies the boundedness of {u_α^{δ,h}}. Hence, for some sequence (δ_k, h_k), α_k we have

u_k ⇀ ū, k → ∞,  (2.4)

i.e., the sequence converges weakly in L_p. Then, taking into account the previous relations, we obtain

‖Aū − f‖ ≤ lim inf_{k→∞} ‖Au_k − f‖ ≤ lim inf_{k→∞} [Φ_k(u_k)]^{1/q} ≤ lim sup_{k→∞} [Φ_k(û)]^{1/q} = lim sup_{k→∞} [ (h_k‖û‖ + δ_k)^q + α_k( ‖û − u_0‖^p + J(û − u_0) ) ]^{1/q} = 0,

i.e., ū is a solution of equation (1.1).
From relations (2.3) and (2.4) we get

‖ū − u_0‖^p + J(ū − u_0) ≤ lim inf_{k→∞} [ ‖u_k − u_0‖^p + J(u_k − u_0) ] ≤ lim sup_{k→∞} [ ‖u_k − u_0‖^p + J(u_k − u_0) ] ≤ ‖û − u_0‖^p + J(û − u_0),

which implies that ū coincides with the normal solution û. Moreover, we have

lim_{k→∞} [ ‖u_k − u_0‖^p + J(u_k − u_0) ] = ‖û − u_0‖^p + J(û − u_0),

which together with the weak lower semicontinuity of the functionals implies the convergence

lim_{k→∞} ‖u_k − u_0‖^p = ‖û − u_0‖^p,  lim_{k→∞} J(u_k − u_0) = J(û − u_0).  (2.5)

Since for p > 1 the space L_p is a uniformly convex space, relations (2.4) and (2.5) imply the strong convergence in L_p (see [9, p. 37])

lim_{k→∞} ‖u_k − û‖ = 0.  (2.6)

The above arguments imply that û is a unique limit point, hence (2.5) and
(2.6) are true for the whole sequence. The theorem is proved.

Corollary 2.1. Assume that we know a priori that the normal solution û belongs to a convex closed set K ⊂ U. Then the conclusion of the theorem remains valid if in the regularization method (2.2) the space U is replaced by the set K.
The above results on the convergence of regularized solutions can be extended to the case of nonlinear operators A and A_h. Namely, the following theorem is true.

Theorem 2.2. Let A and A_h be nonlinear sequentially weakly closed operators (see [9, p. 61]) acting from L_p(D) into L_q(S) with the following approximation condition: for every bounded set Q ⊂ U we have

sup { ‖A(u) − A_h(u)‖ : u ∈ Q } → 0, h → 0.

Then that part of the conclusion of Theorem 2.1 which concerns the convergence of regularized solutions is valid if we replace the usual convergence in L_p by the β-convergence of the optimal sets U_α^{δ,h} to the set of normal solutions Û, i.e.,

sup_{u_α^{δ,h} ∈ U_α^{δ,h}} inf_{û ∈ Û} ‖u_α^{δ,h} − û‖ → 0, δ, h → 0.

The proof is similar to that of Theorem 2.1.
Corollary 2.2. The conclusion of Theorem 2.1 remains valid if instead of the continuity of the operators A and A_h we require only their closedness.
In some cases, during reconstruction of discontinuous solutions of equation (1.1), the need arises to use a stabilizer Ω(u) with a greater regularizing effect in order to obtain the convergence of approximate solutions in a norm stronger than the norm of L_p but weaker than that of W_p^n (n ≥ 1). For this aim (in the one-dimensional case) it is natural to use the Sobolev space with a fractional derivative. Let us consider this in detail. As is known [14], the one-sided Riemann–Liouville integral of fractional order α of a function φ is defined by the formula

u(t) = I_{a+}^α φ(t) = (1/Γ(α)) ∫_a^t φ(s)/(t − s)^{1−α} ds.  (2.7)

Introduce the following norm on the set of functions u(t) representable in the form (2.7) with φ ∈ L_p(a, b):

‖u‖^p = ∫_a^b |u(t)|^p dt + ∫_a^b |D_{a+}^α u(t)|^p dt,

where

D_{a+}^α u(t) = (1/Γ(1 − α)) (d/dt) ∫_a^t u(s)/(t − s)^α ds

is the left-side fractional derivative of order α.

We shall denote this space by W_p^α(a, b). For 1 < p < ∞ it is a complete uniformly convex space, and for p = 2 it is a Hilbert space. Its norm is stronger than the norm of L_p(a, b) but weaker than the norm of W_p^1(a, b) (see [4]), and all these spaces are continuously embedded. Thus, we have at our disposal a scale of spaces whose norms can be used as a stabilizer in the Tikhonov regularization in approximation of discontinuous solutions.

Theorem 2.3. Assume that in the problem (2.2) the stabilizer is Ω(u − u_0) = ‖u − u_0‖_{W_p^α}^p (0 < α < 1, 1 < p < ∞) and the space is U = W_p^α(a, b). Under the assumptions of Theorem 2.1 or 2.2 the problem (2.2) is uniquely solvable and the strong convergence of the regularized solutions takes place:

lim_{δ,h→0} ‖u_α^{δ,h} − û‖_{W_p^α} = 0.

Proof. Since W_p^α and L_q (1 < p, q < ∞) are uniformly convex spaces, one can use known techniques (see, e.g., [9]) to prove the convergence of the extremals u_α.

Corollary 2.3. If the normal solution û belongs to L_q, where q = p/(1 − αp), then lim_{δ,h→0} ‖u_α^{δ,h} − û‖_{L_q} = 0.

The proof follows from the well-known fact that for the given relationship between the parameters the operator I_{a+}^α : L_p → L_q is bounded (see [14, Theorem 3.5]).
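A numerical sketch may clarify the definition (2.7). The Grünwald–Letnikov discretization used below, as well as the step size and the test function, are illustrative assumptions rather than material from the paper.

```python
import math

# Sketch of the Riemann-Liouville fractional integral (2.7),
#   I_{a+}^alpha phi(t) = (1/Gamma(alpha)) * int_a^t phi(s) (t-s)^(alpha-1) ds,
# via the Grunwald-Letnikov approximation (coefficients of (1-z)^(-alpha)).

def frac_integral(phi, t, alpha, a=0.0, n=1000):
    """Grunwald-Letnikov approximation of I_{a+}^alpha phi at the point t."""
    h = (t - a) / n
    g, total = 1.0, 0.0
    for k in range(n + 1):
        total += g * phi(t - k * h)
        g *= (k + alpha) / (k + 1)   # next coefficient of (1-z)^(-alpha)
    return h ** alpha * total

# For phi = 1 the exact value is t^alpha / Gamma(1 + alpha).
alpha, t = 0.5, 1.0
approx = frac_integral(lambda s: 1.0, t, alpha)
exact = t ** alpha / math.gamma(1.0 + alpha)
print(abs(approx - exact))  # small discretization error
```

The same recurrence with alpha replaced by −alpha gives the Grünwald–Letnikov form of the fractional derivative D_{a+}^α.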
3 Subgradient methods for solving the regularized problem
The possibility to apply iterative processes of gradient type in minimization problems can be realized under the condition that there exists at least a subgradient of the objective functional. In our case it is very difficult to study the subdifferentiability of the total variation functional J(u), which enters the objective functional of the problem (2.2), in the space L_p. Therefore, the basic idea proposed here is as follows. We construct a minimizing sequence in the problem (2.2) by the subgradient method not in the space L_p(D) but in a smoother space W_p^1(D), in particular, in W_2^1(D). In this case, for a function u ∈ W_2^1(D), the functional (1.2), due to the Green formula, takes the form

J(u) = ∫_D |∇u| dx,  |∇u| = ( Σ_{i=1}^m (∂u/∂x_i)² )^{1/2}.

This ensures the subdifferentiability of J in W_2^1(D) (see [8, p. 210]) and allows us to use the technique of Hilbert space in proving the convergence of a class of methods of subgradient type.
3.1. The possibility to construct, in principle, a minimizing sequence from smoother functions u ∈ W_2^1(D) in the problem (2.2) follows from the next lemma.

Lemma 3.1. For every function of finite total variation u ∈ U = { v ∈ L_p(D) : J(v) < ∞ } there exists a sequence of functions u^j ∈ C^∞(D) such that

lim_{j→∞} ‖u^j − u‖_{L_p} = 0 (p > 1),  lim_{j→∞} J(u^j) = J(u).

The lemma can be proved, for example, by modifying slightly the technique developed in [7] (Theorem 1.17) for the case p = 1.
To justify the convergence of the subgradient method we consider the case of Hilbert spaces, i.e., p = q = 2.

For approximation of the regularized solution u_α, i.e., the solution of the problem (2.2) (with p = q = 2), we take one of the variants of the subgradient method

u^{k+1} = u^k − γ_k ∂Φ(u^k)/‖∂Φ(u^k)‖,  (3.1)

where ∂Φ(u^k) is an arbitrary subgradient of the objective functional at the point u^k.

Theorem 3.1. Let γ_k > 0, γ_k → 0, Σ_{k=0}^∞ γ_k = ∞, and u^0 ∈ W_2^1. Then the iterations (3.1) have the following properties:

1) lim_{i→∞} Φ(u^{k_i}) = Φ*, where k_i is such that min_{k ≤ k_i} Φ(u^k) = Φ(u^{k_i});

2) lim_{i→∞} ‖u^{k_i} − u_α‖ = 0;

3) lim_{i→∞} J(u^{k_i} − u_0) = lim_{i→∞} ∫_D |∇(u^{k_i} − u_0)| dx = J(u_α − u_0).
Proof. Since the functional Φ(u) is continuous in the space W_2^1, to prove that u^{k_i} is a minimizing sequence one can use the standard technique developed for the case of the space R^m (see, e.g., [6, 10, 13]). Namely, in a similar way one can show that for every ε > 0 there exists an index k = k(ε) such that

u^k ∈ G_ε = { u ∈ W_2^1 : Φ(u) < Φ* + ε }.

Then Φ(u^{k_i}) ≤ Φ* + ε for k_i ≥ k since Φ(u^{k_i}) ≤ Φ(u^k). Thus, we have shown that u^{k_i} is a minimizing sequence, i.e., property 1 is proved.

The last fact implies that {u^{k_i}} is bounded in L_2 and, hence, there exists a weakly convergent subsequence. Without any loss of generality, we reckon that this subsequence coincides with the whole sequence,

u^{k_i} ⇀ ū ∈ L_2, i → ∞.

Taking into account the weak lower semicontinuity of the norms and of the total variation functional J(u), we get

Φ(ū) ≤ lim inf_{i→∞} Φ(u^{k_i}) = lim_{i→∞} Φ(u^{k_i}) = Φ*,  (3.2)

i.e., ū coincides with u_α. Furthermore, we have

‖A_h ū − f_δ‖² ≤ lim inf_{i→∞} ‖A_h u^{k_i} − f_δ‖²,  ‖ū‖² ≤ lim inf_{i→∞} ‖u^{k_i}‖²,  J(ū) ≤ lim inf_{i→∞} J(u^{k_i}).

Combining the last relations with (3.2) and taking into account the fact that ū = u_α is the unique limit point of the sequence u^{k_i}, we get properties 2 and 3.

Analysis of the proof of Theorem 3.1 shows that consideration of the case p = q = 2 during justification of the subgradient method for the problem (2.2) was caused by the fact that such a choice ensures the continuity of the objective functional in the space W_2^1(D). This property guarantees the subdifferentiability of the functional and was essentially used in the proof of property 1. Therefore, using the embedding theorems W_2^1(D) → L_p(D) (see [2]), we can consider the more general (non-Hilbert) case in the problem (2.2) and prove the convergence of iterations in L_p(D) under some additional constraints on the dimension of the space and on the domain D. For this, it is sufficient that the corresponding embedding operator be continuous.
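The iteration (3.1) can be sketched numerically on a small discretized model. The data, grid size, and step-size rule below are illustrative assumptions (a finite-dimensional stand-in for the W_2^1 iteration, not the paper's algorithm verbatim); the step sizes γ_k = c/(k+1) satisfy the conditions of Theorem 3.1.

```python
# Normalized subgradient iteration (3.1) on the discretized objective
#   Phi(u) = ||u - f||^2 + sum_i |u_i - u_{i-1}|  (identity operator assumed).

def subgradient(u, f):
    """One subgradient of Phi at u (sign of each jump, 0 taken at kinks)."""
    g = [2.0 * (u[i] - f[i]) for i in range(len(u))]
    for i in range(1, len(u)):
        s = (u[i] > u[i - 1]) - (u[i] < u[i - 1])
        g[i] += s
        g[i - 1] -= s
    return g

def phi(u, f):
    return (sum((u[i] - f[i]) ** 2 for i in range(len(u)))
            + sum(abs(u[i] - u[i - 1]) for i in range(1, len(u))))

f = [0.0, 0.1, 1.1, 1.0]           # illustrative noisy step data
u = [0.0] * 4                      # starting point u^0
best = phi(u, f)                   # track the record value, as in property 1
for k in range(2000):
    g = subgradient(u, f)
    norm = sum(x * x for x in g) ** 0.5
    if norm == 0.0:
        break
    gamma = 0.5 / (k + 1)          # gamma_k > 0, -> 0, sum = infinity
    u = [u[i] - gamma * g[i] / norm for i in range(len(u))]
    best = min(best, phi(u, f))
print(best)  # record value; well below the initial phi(u^0, f)
```

The record sequence Φ(u^{k_i}) is what Theorem 3.1 asserts converges to Φ*; individual iterates need not decrease the objective monotonically.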
Theorem 3.2. Let the domain D satisfy the cone condition (see [2, p. 66]) and let the assumptions of Theorem 3.1 be fulfilled for the iterative process (3.1). Then the conclusion of Theorem 3.1 is true if property 2 is replaced by the property

2′) lim_{i→∞} ‖u^{k_i} − u_α‖_{L_p} = 0,

where

a) 1 < q ≤ p, 2 ≤ p ≤ 2m/(m − 2) if m > 2;

b) 1 < q ≤ p, 2 ≤ p < ∞ if m = 2;

c) 1 < q ≤ p, 2 ≤ p < ∞ if m < 2 and D is a bounded domain.

The proof follows from Theorem 3.1 and the embedding theorem (see [2, Theorem 5.4]). By that theorem, the embedding operator I : W_2^1 → L_p is continuous under the accepted relationships between the parameters p and m.
Remark 3.1. The variant of the subgradient method in the form (3.1) was chosen only as an illustration of the methods proposed in this research. Under the additional assumption that the subdifferential ∂Φ(u) is bounded, Theorem 3.1 remains valid if, instead of the process (3.1), we use its different modifications: relaxation methods, averaging methods, the ε-subgradient method, and other more effective schemes (see [6, 10, 13]). In particular, the following theorem is valid.

Theorem 3.3. Let the subdifferential ∂Φ(u) in the problem (2.2) (with p = q = 2) be a bounded mapping in W_2^1. Then:

a) if in the assumptions of Theorem 3.1 the sequence of parameters γ_k satisfies the additional relation Σ_{k=0}^∞ γ_k² < ∞, then properties 1–3 are valid for the whole sequence u^k;

b) if Φ* is known and γ_k = (Φ(u^k) − Φ*)/‖∇Φ(u^k)‖², then for the whole sequence u^k properties 1–3 are valid and the following estimate takes place:

lim inf_{k→∞} √k (Φ(u^k) − Φ*) = 0.

The proof of property 1 is similar to the case of R^m (see [6, 13]) and the proof of properties 2 and 3 is similar to that of Theorem 3.1.
Remark 3.2. Under the assumptions of Theorem 3.2, the conclusion of Theorem 3.3 remains valid in the non-Hilbert case if property 2 is replaced by property 2′.

3.2. Assume that along with the basic equation (1.1) we know some additional information on the normal solution in the form of a system of convex inequalities: û ∈ K, where

K = { u ∈ L_2(D) : h_i(u) ≤ 0, i = 1, 2, …, m }.

If we introduce the convex nondifferentiable functional H(u) = max{ h_i(u) : 1 ≤ i ≤ m }, then the a priori set K can be written in the equivalent form K = { u ∈ L_2(D) : H(u) ≤ 0 }. If we want to take into account the a priori information on the solution in our algorithm, it is natural to consider, instead of (2.2), the conditional extremum problem

min { Φ(u) : u ∈ K }.  (3.3)
Let us assume that the functionals h_i(u) are lower semicontinuous in L_p(D) and continuous in W_2^1(D). Moreover, we assume that the Slater condition is fulfilled for the system of convex inequalities that determines the set K.

For the approximate solution of the problem (3.3) we can use the subgradient method (3.1) if we modify it beforehand [10], replacing the subgradient ∂Φ(u^k) by an element g^k ∈ G(u^k), where

G(u) = { ∂Φ(u) if H(u) < 0;  conv{ ∂Φ(u) ∪ ∂H(u) } if H(u) = 0;  ∂H(u) if H(u) > 0 },

and conv{B} is the convex hull of a set B.

Theorem 3.4. For the modified method the conclusions of Theorems 3.1–3.3 are valid.

The proof repeats in essence the arguments in the proof of Theorem 3.1.

Remark 3.3. Since the operator I : W_2^1 → W_2^α is continuous, the analogs of Theorems 3.1–3.4 on the strong convergence of the subgradient method and its modifications in the space W_2^α are true for the problem (2.2) with the stabilizer Ω(u − u_0) = ‖u − u_0‖_{W_2^α}² (p = q = 2).

Theorems 2.1 and 3.1–3.4 allow us to construct a two-stage regularizing algorithm for the problem (1.1). On its first stage, the Tikhonov method with a nonsmooth (nondifferentiable) stabilizer Ω(u) is applied to the equation, and on the second stage, the subgradient method (3.1) or its modified analog with an appropriately chosen parameter is used.
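A minimal sketch of the switching rule that defines G(u) on a toy one-dimensional constrained problem; the instance, the feasibility test, and the step sizes are illustrative assumptions.

```python
# Modified (switching) subgradient method for a toy instance of (3.3):
#   minimize Phi(u) = (u - 2)^2  subject to  h(u) = u - 1 <= 0,
# whose constrained minimizer is u = 1 with Phi* = 1.

def modified_subgradient_step(u, gamma):
    """Subgradient of h when the constraint is violated, of Phi otherwise."""
    if u - 1.0 > 0.0:          # h(u) > 0: move back toward the feasible set
        g = 1.0                # h'(u)
    else:                      # h(u) <= 0: ordinary objective step
        g = 2.0 * (u - 2.0)    # Phi'(u)
    return u - gamma * g / abs(g) if g != 0.0 else u

u, best = 0.0, float("inf")
for k in range(5000):
    u = modified_subgradient_step(u, 0.5 / (k + 1))
    if u - 1.0 <= 0.0:                     # record only feasible iterates
        best = min(best, (u - 2.0) ** 2)
print(best)  # close to the constrained optimum Phi* = 1
```

The iterates oscillate across the constraint boundary with shrinking amplitude, and the best feasible objective value approaches Φ* = 1.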
4 Discrete approximation of the extremal problem
In the numerical implementation of the above regularizing algorithm, one more stage is unavoidable. This stage is the finite-dimensional (discrete) approximation of the process (3.1) or of the infinite-dimensional problem (2.2).

Let D be an m-dimensional rectangular domain, for example, a unit cube. Construct a uniform grid with step h = 1/n for every variable and introduce the discrete analog R_n^m of the space R^m,

R_n^m = { x ∈ R^m : x = (j_1 h, …, j_m h), j_1, …, j_m = 0, ±1, ±2, … },

and grid functions u_n : R_n^m → R; the subscript n means that the function is given on the grid with step h = 1/n.
Introduce the family of restricting operators

P = { p_n : (p_n u)(x) = h^{−m} ∫_{ω_n(x)} u(y) dy },  (4.1)

where ω_n(x) is the elementary cell with volume h^m and one of the vertices at the point x = (x_1, x_2, …, x_m), i.e., ω_n(x) = { y ∈ R^m : x_j − h < y_j ≤ x_j }.
Definition 4.1 [15, 16]. A sequence of spaces {U_n} generates a discrete approximation of a space U if there exists a family P = {p_n} of restricting operators p_n : U → U_n satisfying the following properties:

1) p_n U = U_n for all n;

2) lim_{n→∞} ‖p_n u‖_{U_n} = ‖u‖_U for all u ∈ U;

3) lim_{n→∞} ‖p_n(au + a′u′) − a p_n u − a′ p_n u′‖_{U_n} = 0 for all u, u′ ∈ U and a, a′ ∈ R.
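Property 2) of Definition 4.1 can be checked numerically for the cell-averaging operators (4.1) in one dimension; the test function u(x) = x is an illustrative assumption.

```python
# One-dimensional check of Definition 4.1, property 2), for the operators (4.1):
#   ||p_n u||_{l_2^n} -> ||u||_{L_2}  as  n -> infinity.

def restrict(u_mean, n):
    """(p_n u)(x_i) = h^{-1} * integral of u over the cell (x_i - h, x_i]."""
    h = 1.0 / n
    return [u_mean((i - 1) * h, i * h) for i in range(1, n + 1)]

def discrete_l2_norm(un, n):
    h = 1.0 / n
    return (h * sum(v * v for v in un)) ** 0.5

# For u(x) = x the cell average over (a, b] is (a + b)/2,
# and ||u||_{L_2(0,1)} = 1/sqrt(3).
exact = 1.0 / 3.0 ** 0.5
errors = []
for n in (10, 40, 160):
    un = restrict(lambda a, b: (a + b) / 2.0, n)
    errors.append(abs(discrete_l2_norm(un, n) - exact))
print(errors)  # decreasing, illustrating ||p_n u|| -> ||u||
```

Here the cell averages are computed exactly, so the observed error is purely the discretization effect quantified by property 2).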
As is known [2, 10, 15, 16], the family P defined by formula (4.1) generates a discrete approximation of the space L_p(D) by a sequence of spaces {l_p^n} with the norm

‖u_n‖_{l_p^n}^p = h^m Σ_{x ∈ D_n} |u_n(x)|^p,  D_n = D ∩ R_n^m.

In a similar way we introduce a discrete approximation of the space L_q(S) by a sequence {l_q^n} by means of the restricting operators Q = {q_n}. The discrete approximation of the spaces L_p and L_q generates the discrete and the discrete weak convergences of elements (functions) and operators [2, 15, 16]:

u_n →(P) u ⟺ lim_{n→∞} ‖u_n − p_n u‖ = 0;

u_n ⇀(P) u ⟺ for all v_n →(P′) v: lim_{n→∞} ⟨v_n, u_n⟩ = ⟨v, u⟩;

A_n →(PQ) A ⟺ for all u_n →(P) u: A_n u_n →(Q) Au;

A_n ⇀(PQ) A ⟺ for all u_n ⇀(P) u: A_n u_n ⇀(Q) Au;

here ⟨·, ·⟩ is the duality relation or the inner product (in the Hilbert case), P′ is the family of restricting operators that generates the dual approximation of L_{p′} (1/p + 1/p′ = 1) coordinated with the approximation of L_p (see [2]), and the symbols "→" and "⇀" denote the discrete and the discrete weak convergences, respectively.

In what follows, as a rule, we shall omit the letters P and Q over the symbols of discrete convergence and the subscripts in the notation of norms when this does not lead to misunderstanding.
Lemma 4.1 [2]. If the space U is reflexive and separable, then every bounded sequence {u_n}, u_n ∈ U_n, is discretely weakly compact, i.e., there is a subsequence {u_{n_k}} and an element u ∈ U such that u_{n_k} ⇀ u.

Furthermore, for the discrete weak convergence the following properties take place [2, 15, 16, 17]:

a) u_n → u ⟹ u_n ⇀ u;

b) u_n ⇀ u ⟹ ‖u‖ ≤ lim inf_{n→∞} ‖u_n‖;

c) u_n ⇀ u ⟹ ‖u_n‖ ≤ C;

d) u_n ⇀ u, u′_n ⇀ u′, a_n → a, a′_n → a′ ⟹ a_n u_n + a′_n u′_n ⇀ au + a′u′.

Definition 4.2 [17]. A pair (U, {U_n}) has the discrete Efimov–Stechkin (ES) property if the spaces U and U_n are reflexive and

u_n ⇀ u, lim_{n→∞} ‖u_n‖ = ‖u‖ ⟹ u_n → u.

The function defined for 0 ≤ ε ≤ 2 by the relation

δ_U(ε) = inf { 1 − ‖u + v‖/2 : ‖u‖ = ‖v‖ = 1, ‖u − v‖ ≥ ε }

is called the Clarkson convexity modulus of the space U. It is clear that U is uniformly convex if and only if δ_U(ε) > 0 for ε > 0. As is known, for p > 1 the spaces L_p and W_p^α are uniformly convex.

Lemma 4.2. Let U_n be spaces that generate a discrete approximation of a space U. Let the spaces U_n be uniformly convex and

inf_n δ_{U_n}(ε) = δ(ε) > 0 for all ε > 0.  (4.2)

Then the pair (U, {U_n}) has the discrete ES property.
Proof. Let u_n ∈ U_n, u ∈ U, and let

u_n ⇀ u,  lim_{n→∞} ‖u_n‖ = ‖u‖,  (4.3)

where u_n ≠ θ_n (θ_n is the zero element of the space U_n). Let v_n = u_n/‖u_n‖ and v = u/‖u‖. Property d) of the discrete weak convergence and (4.3) imply that v_n ⇀ v. Let us prove that v_n → v. Really, if we assume the contrary, then for some ε > 0 and some subsequence {v_{n_k}} the inequality ‖v_{n_k} − p_{n_k} v‖ ≥ ε > 0 holds. Since ‖p_{n_k} v‖ → 1, one can assume that ‖v_{n_k} − p_{n_k} v/‖p_{n_k} v‖‖ ≥ ε > 0. In view of the uniform convexity of U_n and condition (4.2) we have

‖ ( v_{n_k} + p_{n_k} v/‖p_{n_k} v‖ )/2 ‖ ≤ 1 − δ_{U_{n_k}}(ε) ≤ 1 − δ(ε).

Obviously, ( v_{n_k} + p_{n_k} v/‖p_{n_k} v‖ )/2 ⇀ v, hence, by property b),

1 = ‖v‖ ≤ lim inf_{k→∞} ‖ ( v_{n_k} + p_{n_k} v/‖p_{n_k} v‖ )/2 ‖ ≤ lim sup_{k→∞} ‖ ( v_{n_k} + p_{n_k} v/‖p_{n_k} v‖ )/2 ‖ ≤ 1 − δ(ε).

The contradiction obtained proves that v_{n_k} → v and, consequently, u_n → u.

Corollary 4.1. The pair (L_p, {l_p^n}) possesses the discrete ES property.

Really, if we denote by r_n : l_p^n → L_p the operators of piecewise-constant interpolation, then ‖r_n u_n‖_{L_p} = ‖u_n‖_{l_p^n} and r_n l_p^n ⊂ L_p. This implies

inf_n δ_{l_p^n}(ε) ≥ δ_{L_p}(ε) > 0 for all ε > 0.

Lemma 4.2 was announced (without proof) in the author's paper [19].
Let us turn to the finite-dimensional approximation of the minimization problem (2.2). We associate this problem with the family of finite-dimensional extremal problems

min { ‖A_n u_n − f_n‖_{l_q^n}^q + α( ‖u_n − u_n^0‖_{l_p^n}^p + J_n(u_n − u_n^0) ) : u_n ∈ l_p^n },  (4.4)

where

J_n(u_n) = sup { h^m Σ_{x ∈ D_n} u_n(x) Σ_{j=1}^m ∂̄_j v_{jn}(x) : v_n ∈ C_0^1(D_n; R_n^m), |v_n(x)| ≤ 1 },

∂̄_j w(x) = ( w(x) − w(x − h e_j) )/h for a grid function w,  e_j = (0, …, 0, 1, 0, …, 0) (with j − 1 zeros before the 1),

v_n(x) = ( v_{1n}(x), v_{2n}(x), …, v_{mn}(x) ),  D_n = D ∩ R_n^m,

A_n : l_p^n → l_q^n are linear bounded operators, f_n ∈ l_q^n, and u_n^0 ∈ l_p^n.
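In one dimension the supremum defining J_n is, for a given grid function, attained by taking v_n equal to the sign of the backward differences, so J_n reduces to the sum of absolute jumps. The following sketch (with an illustrative test function) checks that this discrete total variation approaches J(u).

```python
import math

# One-dimensional sanity check that the discrete total variation used in (4.4)
# reproduces J(u) = int_0^1 |u'(x)| dx as n -> infinity. The test function
# sin(2*pi*x), with J(u) = 4, is an illustrative assumption.

def discrete_tv(samples):
    """Sum of absolute backward differences u_n(x) - u_n(x - h)."""
    return sum(abs(samples[i] - samples[i - 1]) for i in range(1, len(samples)))

n = 1000
u = [math.sin(2.0 * math.pi * i / n) for i in range(n + 1)]
print(discrete_tv(u))  # close to J(u) = 4
```

Over each monotone stretch of the samples the absolute differences telescope, so the sum measures exactly the total rise and fall of the function, which is the one-dimensional total variation.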
Lemma 4.3. The pair (J, {J_n}) is discretely weakly lower semicontinuous, i.e.,

u_n ⇀ u ⟹ J(u) ≤ lim inf_{n→∞} J_n(u_n).

Proof. At first we verify that

Σ_{j=1}^m ∂̄_j v_{jn}(x) → div v(x).  (4.5)

Define the family of restricting operators P̄ = {p̄_n}, p̄_n : L_p ∩ C^1 → l_p^n, by the formula

(p̄_n u)(x) = u(x), x ∈ D_n = D ∩ R_n^m.  (4.6)

As is known [18], this family generates a discrete convergence which is equivalent to the convergence introduced earlier with the use of the family P from (4.1). Thus, it suffices to prove that for every v ∈ C_0^1 we have

h^m Σ_{x ∈ D_n} | Σ_{j=1}^m ∂̄_j v_{jn}(x) − div v(x) | → 0, n → ∞.

The last relation follows from the evident estimate |∂̄_j v_{jn} − ∂v_j(x)/∂x_j| = O(h) for v_j ∈ C^1.

Now assume that |v(x)| ≤ 1, v ∈ C_0^1(D; R^m), and u_n ⇀ u. Then from (4.5) and the definition of the discrete weak convergence we have the chain of relations

⟨u, div v⟩ = ∫_D u(x) div v(x) dx = lim_{n→∞} h^m Σ_{x ∈ D_n} u_n(x) Σ_{j=1}^m ∂̄_j v_{jn}(x)

≤ lim inf_{n→∞} sup { h^m Σ_{x ∈ D_n} u_n(x) Σ_{j=1}^m ∂̄_j v_{jn}(x) : v_n ∈ C_0^1(D_n; R_n^m), |v_n(x)| ≤ 1 } = lim inf_{n→∞} J_n(u_n).

Passing to the supremum over v(x), |v(x)| ≤ 1, on the left-hand side, we obtain the required property.
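The O(h) estimate used in the proof of (4.5) can be observed numerically in one dimension; the test function below is an illustrative assumption.

```python
import math

# Illustration of the estimate behind (4.5): the backward difference quotient
# of a C^1 function deviates from its derivative by O(h).

def max_backward_difference_error(v, dv, n):
    """Max over the grid of |(v(x) - v(x - h))/h - v'(x)| with h = 1/n."""
    h = 1.0 / n
    return max(abs((v(i * h) - v((i - 1) * h)) / h - dv(i * h))
               for i in range(1, n + 1))

e1 = max_backward_difference_error(math.sin, math.cos, 100)
e2 = max_backward_difference_error(math.sin, math.cos, 200)
print(e1, e2)  # halving h roughly halves the error: O(h) behavior
```

The leading error term is (h/2)|v''|, so doubling n roughly halves the observed maximum deviation.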
Introduce the notation Φ_n(u) and û_n for the objective functional and the solution of the problem (4.4). As before, u_α is a solution of the problem (2.2) and r_n are the operators of piecewise-constant interpolation of a grid function. Based on the discrete approximation introduced above, we define the discrete convergence of elements and operators. In the next theorem we formulate sufficient conditions for the discrete convergence of finite-dimensional approximations, i.e., the conditions that guarantee the convergence (as n → ∞) of solutions of the finite-dimensional extremal problems (4.4) to the solution of the regularized infinite-dimensional problem (2.2).

Theorem 4.1. Let the following approximation conditions be fulfilled:

A_n → A, A_n ⇀ A, f_n → f, u_n^0 → u_0.  (4.7)

Then the problem (4.4) has a unique solution û_n and the following properties of convergence hold:

û_n → u_α,  lim_{n→∞} ‖r_n û_n − u_α‖_{L_p} = 0,

lim_{n→∞} J_n(û_n − u_n^0) = J(u_α − u_0),  lim_{n→∞} Φ_n(û_n) = Φ(u_α) = Φ*.
Proof. Solvability. Denote by d_n the infimum in the problem (4.4) for a fixed parameter n. Let {u_n^k} be a minimizing sequence, i.e., u_n^k ∈ l_p^n and Φ_n(u_n^k) → d_n as k → ∞. Since all three terms of the objective functional Φ_n are positive, the sequence {u_n^k} is bounded in l_p^n. Hence, there exists a subsequence {u_n^{k_i}} converging to an element û_n ∈ l_p^n,

lim_{i→∞} ‖u_n^{k_i} − û_n‖_{l_p^n} = 0.

For every admissible vector v_n(x) ∈ C_0^1(D_n; R_n^m) we have

h^m Σ_{x ∈ D_n} û_n(x) Σ_{j=1}^m ∂̄_j v_{jn}(x) = lim_{i→∞} h^m Σ_{x ∈ D_n} u_n^{k_i}(x) Σ_{j=1}^m ∂̄_j v_{jn}(x) ≤ lim inf_{i→∞} sup { h^m Σ_{x ∈ D_n} u_n^{k_i}(x) Σ_{j=1}^m ∂̄_j v_{jn}(x) : |v_n(x)| ≤ 1 } = lim inf_{i→∞} J_n(u_n^{k_i}).

Passing to the supremum over v_n on the left-hand side, we obtain

J_n(û_n) ≤ lim inf_{i→∞} J_n(u_n^{k_i}).  (4.8)

Now, taking into account the continuity of the operator A_n and the property of lower semicontinuity of the functional J_n, which was established in (4.8), we obtain the chain of inequalities

d_n ≤ Φ_n(û_n) ≤ lim inf_{i→∞} Φ_n(u_n^{k_i}) = d_n,

which implies that û_n is a solution of the problem (4.4). The uniqueness of the solution follows from the convexity of the first and third terms in Φ_n(u_n) and the strict convexity of the norm ‖u_n‖_{l_p^n}^p.
By Lemma 3.1, for every function u 2 U = f v j v 2 Lp(D), J (v) < 1g
there exists a sequence uj 2 C 1(D) such that
lim kuj (x) ; u(x)kLp = 0;
lim J (uj ) = J (u):
j !1
j !1
(¢ âà¥å ¬¥áâ å ᤥ« « uj ¢¬¥áâ® uj , ª ª ¢ «¥¬¬¥ 3.1, â ª¦¥ ¢ 1-¬ ¯à¥¤¥«¥
ᤥ« « j ! 1, ¡ë«® i:::) Therefore, for every " > 0 there exists a function
u" 2 C 1(D) such that
= (u) (u") + ":
(4.9)
We now verify that
$$\Phi(u_\varepsilon)=\lim_{n\to\infty}\Phi_n(p_n u_\varepsilon), \eqno(4.10)$$
where $\{p_n\}$ is the family of restriction operators (projection onto the grid nodes) given by formula (4.6). In view of the assumptions of the theorem and the properties of discrete convergence, it suffices to verify the convergence
$$\lim_{n\to\infty}J_n(p_n u_\varepsilon)=J(u_\varepsilon).$$
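The key step in the estimate below is a discrete summation by parts. As a one-dimensional sketch (my own notation: $\bar\partial$ is the forward divided difference, and $v$ vanishes near the boundary of the grid, so the index shift produces no boundary terms):

```latex
\sum_{x} h\,u(x)\,\bar\partial v(x)
  = \sum_{x} u(x)\bigl(v(x+h)-v(x)\bigr)
  = \sum_{x} \bigl(u(x-h)-u(x)\bigr)\,v(x)
  = -\sum_{x} h\,\frac{u(x)-u(x-h)}{h}\,v(x).
```

For smooth $u$ the backward quotient on the right differs from $\bar\partial u$ by $O(h)$, which is absorbed into the $O(h)$ estimate that follows.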
Indeed, since $u_\varepsilon\in C^\infty(D)$ and $v_n\in C_0^1(D_n;\mathbb{R}^{m_n})$, summing by parts we get
$$|J(u_\varepsilon)-J_n(p_n u_\varepsilon)|
=\Big|\int_D|\nabla u_\varepsilon(x)|\,dx-\sup\Big\{\sum_{x\in D_n}h^m u_\varepsilon^n(x)\sum_{j=1}^m\bar\partial_j v_j^n(x)\;:\;v_n\in C_0^1(D_n;\mathbb{R}^{m_n}),\ |v_n(x)|\le 1\Big\}\Big|$$
$$=\Big|\int_D|\nabla u_\varepsilon(x)|\,dx-\sup\Big\{-\sum_{x\in D_n}h^m\sum_{j=1}^m v_j^n(x)\,\bar\partial_j u_\varepsilon^n(x)\;:\;v_n\in C_0^1(D_n;\mathbb{R}^{m_n}),\ |v_n(x)|\le 1\Big\}\Big|$$
$$=\Big|\int_D|\nabla u_\varepsilon(x)|\,dx-\sum_{x\in D_n}h^m|\bar\partial u_\varepsilon^n(x)|\Big|
\le\sum_{x\in D_n}\int_{\omega_n(x)}|\nabla u_\varepsilon(x)-\bar\partial u_\varepsilon^n(x)|\,dx=O(h)\to 0,\quad h\to 0\ (n\to\infty),$$
where the following notation is used:
$$\bar\partial u_\varepsilon^n(x)=\bigl(\bar\partial_1 u_\varepsilon^n(x),\,\bar\partial_2 u_\varepsilon^n(x),\,\dots,\,\bar\partial_m u_\varepsilon^n(x)\bigr),\qquad
\bar\partial_j u_\varepsilon^n(x)=\bigl(u_\varepsilon^n(x+he_j)-u_\varepsilon^n(x)\bigr)/h.$$
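As a numerical sanity check of this $O(h)$ behaviour (my own illustration, not from the paper; the test function $u(x,y)=\sin\pi x\,\sin\pi y$ and the grids are assumptions), the forward-difference total variation can be compared against a fine-grid reference value:

```python
import numpy as np

# Illustration only: the forward-difference total variation of a smooth u
# on D = [0, 1]^2 approaches the integral of |grad u|; a very fine grid
# serves as the reference value.

def discrete_tv(u, h):
    """Sum over grid cells of h^2 * |forward-difference gradient| (m = 2)."""
    dx = (u[1:, :-1] - u[:-1, :-1]) / h  # forward difference along x_1
    dy = (u[:-1, 1:] - u[:-1, :-1]) / h  # forward difference along x_2
    return float(np.sum(h**2 * np.hypot(dx, dy)))

def grid_values(n):
    """Values of u(x, y) = sin(pi x) sin(pi y) on the (n+1) x (n+1) grid."""
    x = np.linspace(0.0, 1.0, n + 1)
    X, Y = np.meshgrid(x, x, indexing="ij")
    return np.sin(np.pi * X) * np.sin(np.pi * Y), 1.0 / n

u_ref, h_ref = grid_values(1024)
tv_ref = discrete_tv(u_ref, h_ref)

errs = {}
for n in (16, 32, 64):
    u, h = grid_values(n)
    errs[n] = abs(discrete_tv(u, h) - tv_ref)
print(errs)  # the error decreases as the mesh size h = 1/n decreases
```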
Combining relations (4.9) and (4.10), we come to the estimate
$$\limsup_{n\to\infty}d_n=\limsup_{n\to\infty}\Phi_n(\hat u_n)\le\limsup_{n\to\infty}\Phi_n(p_n u_\varepsilon)=\Phi(u_\varepsilon)\le\bar\Phi+\varepsilon$$
for arbitrary $\varepsilon>0$, which implies
$$\limsup_{n\to\infty}d_n\le\bar\Phi.$$
The fact just proved implies that the sequence $\{\hat u_n\}$ is bounded; hence, by Lemma 4.1, it is discretely weakly compact, i.e., for some $\{n'\}\subset\{n\}$ we have
$$\hat u_{n'}\rightharpoonup\tilde u\in L_p(D). \eqno(4.11)$$
Based on the assumptions of the theorem, Lemma 4.3, and property b) of discrete convergence, we derive the inequalities
$$\Phi(\tilde u)\le\liminf_{n'\to\infty}\Phi_{n'}(\hat u_{n'})\le\limsup_{n'\to\infty}\Phi_{n'}(\hat u_{n'})\le\limsup_{n'\to\infty}d_{n'}\le\bar\Phi,$$
which imply that $\tilde u$ is a solution of the problem (2.2), i.e., $\tilde u=\bar u$; moreover,
$$\lim_{n'\to\infty}\|\hat u_{n'}\|^p=\|\bar u\|^p, \eqno(4.12)$$
$$\lim_{n'\to\infty}J_{n'}(\hat u_{n'}-u_{n'}^0)=J(\bar u-u^0), \eqno(4.13)$$
$$\lim_{n'\to\infty}\|A_{n'}\hat u_{n'}-f_{n'}\|^q=\|A\bar u-f\|^q. \eqno(4.14)$$
By Lemma 4.2 and its corollary, from relations (4.11) and (4.12) we conclude that
$$\hat u_{n'}\longrightarrow\bar u. \eqno(4.15)$$
Since $\bar u$ is the unique limit point, properties (4.12)–(4.15) hold for the whole sequence. As was established in [20], the operators of piecewise-constant interpolation in the space $L_p$ possess the property
$$u_n\longrightarrow u\ \Longrightarrow\ \lim_{n\to\infty}\|r_n u_n-u\|_{L_p}=0.$$
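A minimal one-dimensional sketch of this interpolation property (my own illustration, not the paper's construction of $r_n$; the jump function and the grids are assumptions): restricting a function of bounded variation to the grid and interpolating back by piecewise constants converges in $L_p$:

```python
import numpy as np

# Illustration only: grid restriction followed by piecewise-constant
# interpolation converges to u in L_p, even for a u with a jump, when the
# jump sits on a cell boundary.

def u_exact(x):
    return np.where(x < 0.5, 0.0, 1.0) + x  # BV: unit jump at 0.5 plus a ramp

def interp_error(n, p=2):
    """L_p distance between u and its piecewise-constant interpolant."""
    h = 1.0 / n
    t = np.arange(n) * h                      # left endpoints of the n cells
    u_n = u_exact(t)                          # grid restriction of u
    s = (np.arange(10 * n) + 0.5) / (10 * n)  # fine quadrature nodes
    cells = np.minimum((s / h).astype(int), n - 1)
    r_n = u_n[cells]                          # piecewise-constant interpolant
    return float(np.mean(np.abs(r_n - u_exact(s)) ** p) ** (1.0 / p))

errors = {n: interp_error(n) for n in (10, 40, 160)}
print(errors)  # decreasing toward 0: r_n u_n -> u in L_p
```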
This completes the proof of the theorem.

Remark 4.1. In the case of Hilbert spaces and a quadratic stabilizer, conditions (4.7) and (4.8) are necessary and sufficient conditions for the convergence of discrete approximations in the Tikhonov regularization method (see [21]).
References
1. R. Acar and C. R. Vogel, Analysis of bounded variation penalty methods for ill-posed problems. Inverse Problems (1994) 10, No. 6, 1217–1229.
2. R. A. Adams, Sobolev Spaces. Academic Press, New York, 1975.
3. A. L. Ageev, Regularization of nonlinear operator equations in the class of discontinuous functions. USSR Comput. Math. Math. Phys. (1980) 20, No. 4, 1–9.
4. V. A. Andrienko, Imbedding theorems for functions of one variable. In: Reviews of Science. Calculus. 1970. VINITI (All-Union Inst. of Scientific and Technical Information), Moscow, 1971, pp. 203–254 (in Russian).
5. G. Chavent and K. Kunisch, Regularization of linear least squares problems by total bounded variation. ESAIM Control Optim. Calc. Var. (1997) 2, 359–376.
6. V. F. Dem'janov and L. V. Vasil'ev, Nondifferentiable Optimization. Nauka, Moscow, 1981 (in Russian).
7. E. Giusti, Minimal Surfaces and Functions of Bounded Variation. Birkhäuser Verlag, Basel–Boston, Mass., 1984.
8. A. D. Ioffe and V. M. Tikhomirov, Theory of Extremal Problems. Nauka, Moscow, 1974 (in Russian).
9. V. K. Ivanov, V. V. Vasin, and V. P. Tanana, Theory of Linear Ill-Posed Problems and Applications. Nauka, Moscow, 1978 (in Russian).
10. I. V. Konnov, Methods of Nondifferentiable Optimization. Kazan State University, Kazan, 1993 (in Russian).
11. A. S. Leonov, Functions in several variables of bounded variation in ill-posed problems. Comput. Math. Math. Phys. (1996) 36, No. 9, 1193–1203.
12. A. S. Leonov, An application of functions of bounded variation of several variables to the piecewise-uniform regularization of ill-posed problems. Dokl. Math. (1996) 54, No. 3, 918–922.
13. B. T. Polyak, Introduction to Optimization. Nauka, Moscow, 1983 (in Russian).
14. S. G. Samko, A. A. Kilbas, and O. I. Marichev, Integrals and Derivatives of Fractional Order and Some Applications. Nauka i Tekhnika, Minsk, 1987 (in Russian).
15. F. Stummel, Diskrete Konvergenz linearer Operatoren. I. Math. Ann. (1970/71) 190, No. 1, 45–92.
16. F. Stummel, Diskrete Konvergenz linearer Operatoren. II. Math. Z. (1971) 120, 231–264.
17. F. Stummel, Discrete convergence of mappings. In: Topics in Numerical Analysis (Proc. Roy. Irish Acad. Conf., Dublin, 1972). Academic Press, London, 1973, pp. 285–310.
18. G. M. Vainikko, Analysis of Discretization Methods. Tartu University, Tartu, 1976 (in Russian).
19. V. V. Vasin, Discrete approximation of infinite-dimensional problems of mathematical programming. In: Methods of Optimization and Applications. Siberian Power Inst., Siberian Branch of the USSR Acad. Sci., Irkutsk, 1979, pp. 44–48 (in Russian).
20. V. V. Vasin, A general scheme for discretization of regularizing algorithms in Banach spaces. Soviet Math. Dokl. (1981) 23, 494–498.
21. V. V. Vasin and A. L. Ageev, Ill-Posed Problems with A Priori Information. VSP, Utrecht, The Netherlands, 1995.