Stochastic methods based on VU-decomposition methods

Transcription

Hindawi Publishing Corporation
Mathematical Problems in Engineering
Volume 2014, Article ID 894248, 5 pages
http://dx.doi.org/10.1155/2014/894248
Research Article
Stochastic Methods Based on VU-Decomposition Methods for
Stochastic Convex Minimax Problems
Yuan Lu,¹ Wei Wang,² Shuang Chen,³ and Ming Huang³

¹ Normal College, Shenyang University, Shenyang 110044, China
² School of Mathematics, Liaoning Normal University, Dalian 116029, China
³ School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
Correspondence should be addressed to Wei Wang; wei_0713@sina.com
Received 6 August 2014; Revised 29 November 2014; Accepted 29 November 2014; Published 4 December 2014
Academic Editor: Hamid R. Karimi
Copyright © 2014 Yuan Lu et al. This is an open access article distributed under the Creative Commons Attribution License, which
permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper applies the sample average approximation (SAA) method, based on the VU-space decomposition theory, to solve stochastic convex minimax problems. Under moderate conditions, the SAA solution converges to its true counterpart with probability approaching one, and the convergence is exponentially fast as the sample size increases. Based on the VU-theory, a superlinearly convergent VU-algorithm frame is designed to solve the SAA problem.
1. Introduction
In this paper, the following stochastic convex minimax problem (SCMP) is considered:
$$\min_{x \in R^n} f(x), \tag{1}$$

where

$$f(x) = \max\{ E[f_i(x, \xi)] : i = 0, \dots, m \}, \tag{2}$$

the functions $f_i(x, \xi) : R^n \to R$, $i = 0, \dots, m$, are convex and $C^2$, $\xi : \Omega \to \Xi \subset R^n$ is a random vector defined on the probability space $(\Omega, \Upsilon, P)$, and $E$ denotes the mathematical expectation with respect to the distribution of $\xi$.
SCMP is a natural extension of deterministic convex minimax problems (CMP for short). The CMP has a number of important applications in operations research, engineering, and economics. While many practical problems involve only deterministic data, there are important instances where the problem data contain some uncertainty, and consequently SCMP models are proposed to reflect these uncertainties.
A blanket assumption is made that, for every $x \in R^n$, $E[f_i(x, \xi)]$, $i = 0, \dots, m$, is well defined. Let $\xi^1, \dots, \xi^N$ be a sampling of $\xi$. A well-known approach based on sampling is the so-called SAA method, that is, using the sample average value of $f_i(x, \xi)$ to approximate its expected value, because the classical law of large numbers for random functions ensures that the sample average value of $f_i(x, \xi)$ converges with probability 1 to $E[f_i(x, \xi)]$ when the sampling is independent and identically distributed (i.i.d. for short). Specifically, we can write down the SAA of our SCMP (1) as follows:
$$\min_{x \in R^n} \hat{f}_N(x), \tag{3}$$

where

$$\hat{f}_N(x) = \max\{ \hat{f}_i^N(x) : i = 0, \dots, m \}, \qquad \hat{f}_i^N(x) := \frac{1}{N} \sum_{j=1}^{N} f_i(x, \xi^j). \tag{4}$$
The problem (3) is called the SAA problem and (1) the true
problem.
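To fix ideas, the following Python sketch assembles the SAA objective $\hat{f}_N$ of (3)-(4) for a toy SCMP by Monte Carlo sampling. The quadratic structure functions and the Gaussian distribution of $\xi$ are illustrative assumptions, not data from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical structure functions f_i(x, xi), convex and C^2 in x
# (illustrative choices only; the paper does not specify test functions).
def f0(x, xi):
    return np.dot(x - xi, x - xi)            # ||x - xi||^2

def f1(x, xi):
    return np.dot(x, x) + np.dot(xi, x)      # ||x||^2 + <xi, x>

structure_funcs = [f0, f1]

# An i.i.d. sample xi^1, ..., xi^N of the random vector xi (assumed Gaussian).
N, n = 1000, 3
samples = rng.normal(size=(N, n))

def f_hat_N(x):
    """SAA objective (3)-(4): the max over i of the sample averages."""
    averages = [np.mean([f(x, xi) for xi in samples]) for f in structure_funcs]
    return max(averages)

print(f_hat_N(np.zeros(n)))   # approximates f(0) = max_i E[f_i(0, xi)]
```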
The SAA method has been a hot topic of research in stochastic optimization. Pagnoncelli et al. [1] present the SAA method for chance constrained programming. Shapiro et al. [2] consider stochastic generalized equations by using the SAA method. Xu [3] proposes the SAA method for a class of stochastic variational inequality problems. Liu et al. [4] give penalized SAA methods for stochastic mathematical programs with complementarity constraints. Chen et al. [5] discuss SAA methods based on Newton's method for the stochastic variational inequality problem with constraint conditions. Since the objective functions of the SAA problems in the references cited above are smooth, they can be solved by Newton's method.
More recently, new conceptual schemes have been developed, which are based on the VU-theory introduced in [6]; see also [7–11]. The idea is to decompose $R^n$ into two orthogonal subspaces $V$ and $U$ at a point $x$, where the nonsmoothness of $f$ is concentrated essentially on $V$ and the smoothness of $f$ appears on the $U$ subspace. More precisely, for a given $g \in \partial f(x)$, where $\partial f(x)$ denotes the subdifferential of $f$ at $x$ in the sense of convex analysis, $R^n$ can be decomposed into the direct sum of two orthogonal subspaces, that is, $R^n = U \oplus V$, where $V = \mathrm{lin}(\partial f(x) - g)$ and $U = V^\perp$. As a result, an algorithm frame can be designed for the SAA problem that makes a step in the $V$ space, followed by a U-Newton step, in order to obtain superlinear convergence. A VU-space decomposition method for solving a constrained nonsmooth convex program is presented in [12]. A decomposition algorithm based on a proximal bundle-type method with inexact data is presented for minimizing an unconstrained nonsmooth convex function in [13].
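To make the decomposition concrete, the following NumPy sketch computes orthonormal bases of $V = \mathrm{lin}(\partial f(x) - g)$ and $U = V^\perp$ from a finite bundle of subgradients; the subgradient values are hypothetical and serve only to illustrate the construction.

```python
import numpy as np

# Hypothetical subgradients g in ∂f(x) collected at a kink of f (n = 3).
subgradients = np.array([[1.0, 0.0, 0.0],
                         [1.0, 2.0, 0.0],
                         [1.0, 0.0, 3.0]])
g = subgradients[0]

# V = lin(∂f(x) - g) is spanned by the differences of the subgradients.
D = (subgradients[1:] - g).T               # columns span V

# A full SVD yields orthonormal bases: the leading r left singular
# vectors span V, and the remaining ones span U = V-perp.
W, sigma, _ = np.linalg.svd(D)
r = int(np.sum(sigma > 1e-12))             # r = dim V
V_basis, U_basis = W[:, :r], W[:, r:]

print("dim V =", r, "dim U =", 3 - r)
print("basis of U:\n", U_basis)            # directions of smooth behavior
```

Here $\dim V = 2$, reflecting the two independent subgradient differences, and $U$ is the single remaining direction along which $f$ behaves smoothly.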
In this paper, the objective function in (1) is nonsmooth, but it has a structure amenable to VU-space decomposition. Based on the VU-theory, a superlinearly convergent VU-algorithm frame is designed to solve the SAA problem. The rest of the paper is organized as follows. In the next section, the SCMP is transformed into a nonsmooth problem, and it is proved that the approximate solution set converges to the true solution set in the sense of Hausdorff distance. In Section 3, the VU-theory of the SAA problem is given. In the final section, the VU-decomposition algorithm frame for the SAA problem is designed.
2. Convergence Analysis of the SAA Problem

In this section, we discuss the convergence of (3) to (1) as $N$ increases. Specifically, we show that the solution of the SAA problem (3) converges to its true counterpart as $N \to \infty$. First, we state the basic assumptions for the SAA method.
Assumption 1. (a) Let $X$ be a set. For $i = 1, \dots, n$, the limits

$$M_{F_i}(t) := \lim_{N \to \infty} M_{F_i^N}(t) \tag{5}$$

exist for every $x \in X$.

(b) For every $s \in X$, the moment-generating function $M_s(t)$ is finite-valued for all $t$ in a neighborhood of zero.

(c) There exists a measurable function $\kappa : \Omega \to R_+$ such that

$$|g(s', \xi) - g(s, \xi)| \le \kappa(\xi)\, \|s' - s\| \tag{6}$$

for all $\xi \in \Omega$ and all $s', s \in X$.

(d) The moment-generating function $M_\kappa(t) = E[e^{t \kappa(\xi)}]$ of $\kappa(\xi)$ is finite-valued for all $t$ in a neighborhood of zero, where $M_s(t) = E[e^{t(g(s, \xi) - G(s))}]$ is the moment-generating function of the random variable $g(s, \xi) - G(s)$.
Theorem 2. Let $S^*$ and $S_N$ denote the solution sets of (1) and (3), respectively, and assume that both $S^*$ and $S_N$ are nonempty. Then, for any $\varepsilon > 0$ and all $N$ sufficiently large, one has $D(S_N, S^*) < \varepsilon$, where $D(S_N, S^*) = \sup_{x \in S_N} d(x, S^*)$.
Proof. For any points $\tilde{x} \in S_N$ and $x \in R^n$, we have

$$\hat{f}_N(\tilde{x}) = \max\Big\{ \hat{f}_i^N(\tilde{x}),\ i = 0, \dots, m \Big\} = \max\Big\{ \frac{1}{N} \sum_{j=1}^{N} f_i(\tilde{x}, \xi^j),\ i = 0, \dots, m \Big\} \le \max\Big\{ \frac{1}{N} \sum_{j=1}^{N} f_i(x, \xi^j),\ i = 0, \dots, m \Big\}. \tag{7}$$
From Assumption 1, we know that, for any $\varepsilon > 0$, there exists $M$ such that, for all $N > M$ and $i = 0, \dots, m$,

$$\Big| \frac{1}{N} \sum_{j=1}^{N} f_i(x, \xi^j) - E[f_i(x, \xi)] \Big| < \varepsilon. \tag{8}$$
By letting $N > M$, we obtain

$$\hat{f}_N(\tilde{x}) \le \max\Big\{ \frac{1}{N} \sum_{j=1}^{N} f_i(x, \xi^j),\ i = 0, \dots, m \Big\} \le \max\{ E[f_i(x, \xi)] + \varepsilon,\ i = 0, \dots, m \} = f(x) + \varepsilon. \tag{9}$$

Applying (8) at $\tilde{x}$ itself gives $f(\tilde{x}) \le \hat{f}_N(\tilde{x}) + \varepsilon$, so $f(\tilde{x}) \le f(x) + 2\varepsilon$ for every $x$; that is, $\tilde{x}$ nearly minimizes the true objective. This shows that $d(\tilde{x}, S^*) < \varepsilon$ and, since $\tilde{x} \in S_N$ is arbitrary, $D(S_N, S^*) < \varepsilon$.
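The convergence stated in Theorem 2 can be observed numerically. The sketch below (Python with SciPy) solves SAA instances of a hypothetical one-dimensional SCMP whose true solution set is known in closed form; all problem data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

# Toy SCMP with xi ~ N(1, 1) and structure functions
#   f_0(x, xi) = (x - xi)^2,   f_1(x, xi) = (x - xi - 1)^2,
# so E[f_0] = (x-1)^2 + 1 and E[f_1] = (x-2)^2 + 1, and the true
# solution set of (1) is S* = {1.5}. Hypothetical, for illustration only.
def saa_solution(N):
    xi = rng.normal(1.0, 1.0, size=N)
    f_hat = lambda x: max(np.mean((x - xi) ** 2), np.mean((x - xi - 1.0) ** 2))
    return minimize_scalar(f_hat, bounds=(-4.0, 6.0), method="bounded").x

for N in [10, 100, 1000, 10000]:
    print(N, abs(saa_solution(N) - 1.5))   # d(x_N, S*) shrinks as N grows
```

In this toy model the SAA minimizer is the crossing point $\bar{\xi}_N + 0.5$ of the two sample-average parabolas, so $d(x_N, S^*)$ decays at the Monte Carlo rate $O(N^{-1/2})$.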
We now move on to discuss the exponential rate of convergence of the SAA problem (3) to the true problem (1) as the sample size increases.
Theorem 3. Let $x_N$ be a solution to the SAA problem (3) and let $S^*$ be the solution set of the true problem (1). Suppose Assumption 1 holds. Then, for every $\varepsilon > 0$, there exist positive constants $c(\varepsilon)$ and $d(\varepsilon)$ such that

$$\mathrm{Prob}\{ d(x_N, S^*) \ge \varepsilon \} \le c(\varepsilon)\, e^{-N d(\varepsilon)} \tag{10}$$

for $N$ sufficiently large.
Proof. Let $\varepsilon > 0$ be any small positive number. By Theorem 2 and

$$\Big| \frac{1}{N} \sum_{j=1}^{N} f_i(x, \xi^j) - E[f_i(x, \xi)] \Big| < \varepsilon, \tag{11}$$

we have $d(x_N, S^*) < \varepsilon$. Therefore, by Assumption 1, we have

$$\mathrm{Prob}\{ d(x_N, S^*) \ge \varepsilon \} \le \mathrm{Prob}\Big\{ \Big| \frac{1}{N} \sum_{j=1}^{N} f_i(x, \xi^j) - E[f_i(x, \xi)] \Big| \ge \delta \Big\} \le c(\varepsilon)\, e^{-N d(\varepsilon)}. \tag{12}$$

The proof is complete.
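The exponential decay in (10) can likewise be illustrated empirically by estimating the tail probability over repeated replications of the same toy SCMP used above (again, all data are hypothetical assumptions, not the paper's):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

# Same toy SCMP as before (S* = {1.5}); estimate Prob{d(x_N, S*) >= eps}
# over repeated runs. Theorem 3 predicts decay like c(eps) * exp(-N d(eps)).
def saa_solution(N):
    xi = rng.normal(1.0, 1.0, size=N)
    f_hat = lambda x: max(np.mean((x - xi) ** 2), np.mean((x - xi - 1.0) ** 2))
    return minimize_scalar(f_hat, bounds=(-4.0, 6.0), method="bounded").x

eps, trials = 0.2, 500
for N in [5, 20, 80, 320]:
    tail = np.mean([abs(saa_solution(N) - 1.5) >= eps for _ in range(trials)])
    print(N, tail)   # empirical tail probability; drops roughly geometrically
```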
3. The VU-Theory of the SAA Problem

In the following sections, we give the VU-theory, the VU-decomposition algorithm frame, and the convergence analysis of the SAA problem.

The subdifferential of $\hat{f}_N$ at a point $x \in R^n$ can be computed in terms of the gradients of the functions that are active at $x$. More precisely,

$$\partial \hat{f}_N(x) = \mathrm{Conv}\Big\{ g \in R^n \;\Big|\; g = \sum_{i \in I(x)} \frac{\alpha_i}{N} \sum_{j=1}^{N} \nabla f_i(x, \xi^j) : \alpha = (\alpha_i)_{i \in I(x)} \in \Delta_{|I(x)|} \Big\}, \tag{13}$$

where

$$I(x) = \{ i \in I \mid \hat{f}_N(x) = \hat{f}_i^N(x) \} \tag{14}$$

is the set of active indices at $x$, and

$$\Delta_s = \Big\{ \alpha \in R^s \;\Big|\; \alpha_i \ge 0, \ \sum_{i=1}^{s} \alpha_i = 1 \Big\}. \tag{15}$$

Let $\bar{x} \in R^n$ be a solution of (3). By continuity of the structure functions, there exists a ball $B_\varepsilon(\bar{x}) \subseteq R^n$ such that

$$I(x) \subseteq I(\bar{x}), \quad \forall x \in B_\varepsilon(\bar{x}). \tag{16}$$

For convenience, we assume that the cardinality of $I(\bar{x})$ is $m_1 + 1$ ($m_1 \le m$) and reorder the structure functions so that $I(\bar{x}) = \{0, \dots, m_1\}$. From now on, we consider that

$$\hat{f}_N(x) = \max\{ \hat{f}_i^N(x) : i \in I(\bar{x}) \}, \quad \forall x \in B_\varepsilon(\bar{x}). \tag{17}$$

The following assumption will be used in the rest of this paper.

Assumption 4. The set $\{ \nabla \hat{f}_i^N(\bar{x}) - \nabla \hat{f}_0^N(\bar{x}) \}_{0 \ne i \in I(\bar{x})}$ is linearly independent.

Theorem 5. Suppose Assumption 4 holds. Then $R^n$ can be decomposed at $\bar{x}$: $R^n = U \oplus V$, where

$$V = \mathrm{lin}\{ \nabla \hat{f}_i^N(\bar{x}) - \nabla \hat{f}_0^N(\bar{x}) \}_{0 \ne i \in I(\bar{x})}, \tag{18}$$

$$U = \{ d \in R^n \mid \langle d, \nabla \hat{f}_i^N(\bar{x}) - \nabla \hat{f}_0^N(\bar{x}) \rangle = 0, \ 0 \ne i \in I(\bar{x}) \}. \tag{19}$$

Proof. The proof can be obtained directly by using Assumption 4 and the definitions of the spaces $V$ and $U$.

Given a subgradient $g \in \partial \hat{f}_N(\bar{x})$ with V-component $g_V = V^T g$, the U-Lagrangian of $\hat{f}_N$, depending on $g_V$, is defined by

$$R^{\dim U} \ni u \longmapsto L_U(u; g_V) := \min_{v \in R^{\dim V}} \{ \hat{f}_N(\bar{x} + Uu + Vv) - \langle g_V, v \rangle_V \}. \tag{20}$$

The associated set of V-space minimizers is defined by

$$W(u; g_V) := \{ v : L_U(u; g_V) = \hat{f}_N(\bar{x} + Uu + Vv) - \langle g_V, v \rangle_V \}. \tag{21}$$

Theorem 6. Suppose Assumption 4 holds. Let $\chi(u) = \bar{x} + u \oplus v(u)$ be a trajectory leading to $\bar{x}$ and let $H := \nabla^2 L_U(0; 0)$. Then for all $u$ sufficiently small the following hold:

(i) the nonlinear system, with variable $v$ and parameter $u$,

$$\hat{f}_i^N(\bar{x} + Uu + Vv) - \hat{f}_0^N(\bar{x} + Uu + Vv) = 0, \quad 0 \ne i \in I(\bar{x}), \tag{22}$$

has a unique solution $v = v(u)$, and $v : R^{\dim U} \to R^{\dim V}$ is a $C^2$ function;

(ii) $\chi(u)$ is a $C^2$ function with $J\chi(u) = U + V\, Jv(u)$;

(iii) $L_U(u; 0) = \hat{f}_i^N(\bar{x} + u \oplus v(u)) = \hat{f}_N(\bar{x} + u \oplus v(u)) = \hat{f}_N(\bar{x}) + \frac{1}{2} u^T H u + o(|u|^2)$;

(iv) $\nabla L_U(u; 0) = H u + o(|u|)$;

(v) $\hat{f}_i^N(\chi(u)) = \hat{f}_N(\chi(u))$, $i \in I(\bar{x})$.
Proof. Item (i) follows from the assumption that the $f_i$ are $C^2$ and from a second-order implicit function theorem (see [14], Theorem 2.1). Since $v(u)$ is $C^2$, $\chi(u)$ is $C^2$ and the Jacobians $Jv(u)$ exist and are continuous. Differentiating the primal track with respect to $u$, we obtain the expression for $J\chi(u)$, and item (ii) follows.

(iii) By the definitions of $L_U(u; g_V)$ and $W(u; g_V)$, we have

$$L_U(u; 0) = \hat{f}_i^N(\bar{x} + u \oplus v(u)) = \hat{f}_N(\bar{x} + u \oplus v(u)). \tag{23}$$

According to the second-order expansion of $L_U$, we obtain

$$L_U(u; 0) = L_U(0; 0) + \langle \nabla L_U(0; 0), u \rangle + \frac{1}{2} u^T \nabla^2 L_U(0; 0)\, u + o(|u|^2). \tag{24}$$

Since $L_U(0; 0) = \hat{f}_i^N(\bar{x})$, $i \in I(\bar{x})$, $\nabla L_U(0; 0) = 0$, and $H = \nabla^2 L_U(0; 0)$,

$$L_U(u; 0) = \hat{f}_i^N(\bar{x}) + \frac{1}{2} u^T H u + o(|u|^2). \tag{25}$$

Similarly to (iii), we get (iv):

$$\nabla L_U(u; 0) = \nabla L_U(0; 0) + \nabla^2 L_U(0; 0)\, u + o(|u|) = H u + o(|u|). \tag{26}$$

The conclusion of (v) can be obtained from (i) and the definition of $\chi(u)$.
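As a numerical illustration of item (i), the following sketch traces the primal track by solving the system (22) with a root finder. The two quadratics standing in for $\hat{f}_0^N$ and $\hat{f}_1^N$, and the bases $U$ and $V$, are hypothetical choices for which $\bar{x} = (0.5, 0)$ and, by hand, $v(u) = u^2/2$:

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical max-function (not from the paper):
#   f0(x) = x1^2 + x2^2,  f1(x) = (x1 - 1)^2 + 2*x2^2,
# with minimizer x_bar = (0.5, 0), where both functions are active.
f0 = lambda x: x[0] ** 2 + x[1] ** 2
f1 = lambda x: (x[0] - 1.0) ** 2 + 2.0 * x[1] ** 2

x_bar = np.array([0.5, 0.0])
U = np.array([0.0, 1.0])   # U-basis (orthogonal to grad f1 - grad f0 at x_bar)
V = np.array([1.0, 0.0])   # V-basis

# Primal track of Theorem 6(i): for each u, solve f1 - f0 = 0 for v = v(u).
def v_of_u(u):
    system = lambda v: f1(x_bar + u * U + v * V) - f0(x_bar + u * U + v * V)
    return fsolve(system, 0.0)[0]

for u in [0.2, 0.1, 0.05]:
    v = v_of_u(u)
    chi = x_bar + u * U + v * V
    print(u, v, f0(chi))   # here v(u) = u^2/2: a C^2 trajectory to x_bar
```

The printed values confirm that $v(u)$ vanishes to second order in $u$, consistent with items (i)-(iii).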
4. Algorithm and Convergence Analysis

Supposing $0 \in \partial \hat{f}_N(\bar{x})$, we give an algorithm frame which can solve (3). The algorithm makes a step in the $V$-subspace, followed by a U-Newton step, in order to obtain a superlinear convergence rate.

Algorithm 7 (algorithm frame).

Step 0. Initialization: given $\varepsilon > 0$, choose a starting point $x^{(0)}$ close enough to $\bar{x}$ and a subgradient $\tilde{g}^{(0)} \in \partial \hat{f}_N(x^{(0)})$, and set $k = 0$.

Step 1. Stop if

$$\| \tilde{g}^{(k)} \| \le \varepsilon. \tag{27}$$

Step 2. Find the active index set $I(\bar{x})$.

Step 3. Construct the VU-decomposition at $\bar{x}$, that is, $R^n = V \oplus U$. Compute

$$\nabla^2 L_U(0; 0) = U^T M(0)\, U, \tag{28}$$

where

$$M(0) = \sum_{i \in I(\bar{x})} \alpha_i \nabla^2 \hat{f}_i^N(\bar{x}). \tag{29}$$

Step 4. Perform the V-step. Compute $\delta_V^{(k)}$, which denotes $v(u)$ in (22), and set $\tilde{x}^{(k)} = x^{(k)} + 0 \oplus \delta_V^{(k)}$.

Step 5. Perform the U-step. Compute $\delta_U^{(k)}$ from the system

$$U^T M(0)\, U \delta_U + U^T \tilde{g}^{(k)} = 0, \tag{30}$$

where

$$\sum_{i \in I(\bar{x})} \alpha_i(u)\, \nabla \hat{f}_i^N(x^{(k)}) = \tilde{g}^{(k)} \in \partial \hat{f}_N(x^{(k)}) \tag{31}$$

is such that $V^T \tilde{g}^{(k)} = 0$. Compute $x^{(k+1)} = \tilde{x}^{(k)} + \delta_U^{(k)} \oplus 0 = x^{(k)} + \delta_U^{(k)} \oplus \delta_V^{(k)}$.

Step 6. Update: set $k = k + 1$ and return to Step 1.

Theorem 8. Suppose that the starting point $x^{(0)}$ is close enough to $\bar{x}$, $0 \in \mathrm{ri}\, \partial \hat{f}_N(\bar{x})$, and $\nabla^2 L_U(0; 0) \succ 0$. Then the iteration points $\{x^{(k)}\}_{k=1}^{\infty}$ generated by Algorithm 7 converge to $\bar{x}$ and satisfy

$$\| x^{(k+1)} - \bar{x} \| = o(\| x^{(k)} - \bar{x} \|). \tag{32}$$

Proof. Let $u^{(k)} = (x^{(k)} - \bar{x})_U$ and $v^{(k)} = (x^{(k)} - \bar{x})_V + \delta_V^{(k)}$. It follows from Theorem 6(i) that

$$\| (x^{(k+1)} - \bar{x})_V \| = \| (\tilde{x}^{(k)} - \bar{x})_V \| = o(\| (x^{(k)} - \bar{x})_U \|) = o(\| x^{(k)} - \bar{x} \|). \tag{33}$$

Since $\nabla^2 L_U(0; 0)$ exists and $\nabla L_U(0; 0) = 0$, we have from the definition of the U-Hessian matrix that

$$\nabla L_U(u^{(k)}; 0) = U^T \tilde{g}^{(k)} = \nabla L_U(0; 0) + \nabla^2 L_U(0; 0)\, u^{(k)} + o(\| u^{(k)} \|_U) = \nabla^2 L_U(0; 0)\, u^{(k)} + o(\| u^{(k)} \|_U). \tag{34}$$

By virtue of (30), we have $\nabla^2 L_U(0; 0)(u^{(k)} + \delta_U^{(k)}) = o(\| u^{(k)} \|_U)$. It follows from the hypothesis $\nabla^2 L_U(0; 0) \succ 0$ that $\nabla^2 L_U(0; 0)$ is invertible and hence $\| u^{(k)} + \delta_U^{(k)} \| = o(\| u^{(k)} \|_U)$. In consequence, one has

$$(x^{(k+1)} - \bar{x})_U = (x^{(k+1)} - \tilde{x}^{(k)})_U + (\tilde{x}^{(k)} - x^{(k)})_U + (x^{(k)} - \bar{x})_U = u^{(k)} + \delta_U^{(k)} = o(\| u^{(k)} \|_U) = o(\| x^{(k)} - \bar{x} \|). \tag{35}$$

The proof is completed by combining (33) and (35).
Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments

The research is supported by the National Natural Science Foundation of China under Project nos. 11301347, 11171138, and 11171049, and by the General Project of the Education Department of Liaoning Province, no. L201242.
References

[1] B. K. Pagnoncelli, S. Ahmed, and A. Shapiro, "Sample average approximation method for chance constrained programming: theory and applications," Journal of Optimization Theory and Applications, vol. 142, no. 2, pp. 399–416, 2009.
[2] A. Shapiro, D. Dentcheva, and A. Ruszczyński, Lectures on Stochastic Programming: Modeling and Theory, SIAM, Philadelphia, Pa, USA, 2009.
[3] H. Xu, "Sample average approximation methods for a class of stochastic variational inequality problems," Asia-Pacific Journal of Operational Research, vol. 27, no. 1, pp. 103–119, 2010.
[4] Y. Liu, H. Xu, and J. J. Ye, "Penalized sample average approximation methods for stochastic mathematical programs with complementarity constraints," Mathematics of Operations Research, vol. 36, no. 4, pp. 670–694, 2011.
[5] S. Chen, L.-P. Pang, F.-F. Guo, and Z.-Q. Xia, "Stochastic methods based on Newton method to the stochastic variational inequality problem with constraint conditions," Mathematical and Computer Modelling, vol. 55, no. 3-4, pp. 779–784, 2012.
[6] C. Lemaréchal, F. Oustry, and C. Sagastizábal, "The U-Lagrangian of a convex function," Transactions of the American Mathematical Society, vol. 352, no. 2, pp. 711–729, 2000.
[7] R. Mifflin and C. Sagastizábal, "VU-decomposition derivatives for convex max-functions," in Ill-Posed Variational Problems and Regularization Techniques, M. Théra and R. Tichatschke, Eds., vol. 477 of Lecture Notes in Economics and Mathematical Systems, pp. 167–186, Springer, Berlin, Germany, 1999.
[8] C. Lemaréchal and C. Sagastizábal, "More than first-order developments of convex functions: primal-dual relations," Journal of Convex Analysis, vol. 3, no. 2, pp. 255–268, 1996.
[9] R. Mifflin and C. Sagastizábal, "On VU-theory for functions with primal-dual gradient structure," SIAM Journal on Optimization, vol. 11, no. 2, pp. 547–571, 2000.
[10] R. Mifflin and C. Sagastizábal, "Functions with primal-dual gradient structure and U-Hessians," in Nonlinear Optimization and Related Topics, G. Di Pillo and F. Giannessi, Eds., vol. 36 of Applied Optimization, pp. 219–233, Kluwer Academic, 2000.
[11] R. Mifflin and C. Sagastizábal, "Primal-dual gradient structured functions: second-order results; links to epi-derivatives and partly smooth functions," SIAM Journal on Optimization, vol. 13, no. 4, pp. 1174–1194, 2003.
[12] Y. Lu, L.-P. Pang, F.-F. Guo, and Z.-Q. Xia, "A superlinear space decomposition algorithm for constrained nonsmooth convex program," Journal of Computational and Applied Mathematics, vol. 234, no. 1, pp. 224–232, 2010.
[13] Y. Lu, L.-P. Pang, J. Shen, and X.-J. Liang, "A decomposition algorithm for convex nondifferentiable minimization with errors," Journal of Applied Mathematics, vol. 2012, Article ID 215160, 15 pages, 2012.
[14] S. Lang, Real and Functional Analysis, vol. 142 of Graduate Texts in Mathematics, Springer, New York, NY, USA, 3rd edition, 1993.