Standard name for the computation of Lagrange multipliers iteratively by fixing other multipliers?
Dear Optimization Experts,
Background:
I have a convex optimization problem that can be written in the general form below,
\begin{equation}
\begin{aligned}
& \underset{x \in \mathbb{R}^N}{\text{minimize}}
& & f(x) \\
& \text{subject to}
& & g_i(x) - \alpha_i \leq 0 \quad \forall \, i = 1,\cdots,K \; ,
\end{aligned}
\end{equation}
where both functions $g_i: \mathbb{R}^N \rightarrow \mathbb{R}$ and $f: \mathbb{R}^N \rightarrow \mathbb{R}$ are convex, and each $\alpha_i \in \mathbb{R}$ is given.
The Lagrangian is:
\begin{align}
L\left(x, \left\{\lambda_i\right\}\right)
&= f(x) + \sum\limits_{i=1}^{K} \lambda_i \left(g_i(x) - \alpha_i\right) \; .
\end{align}
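Minimizing the Lagrangian over $x$ with the multipliers held fixed gives the Lagrange dual function (written here as $q$, a symbol not used above); updating one $\lambda_i$ at a time with the others fixed then amounts to coordinate-wise maximization of this concave function:
\begin{align}
q\left(\left\{\lambda_i\right\}\right) = \min_{x \in \mathbb{R}^N} L\left(x, \left\{\lambda_i\right\}\right) \; , \qquad \lambda_i \geq 0 \quad \forall \, i \; .
\end{align}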
Question:
If $K=1$, then I can obtain a closed-form solution for $x$ (and an analytical expression for the Lagrange multiplier $\lambda_1$) from the KKT conditions.
The question arises when $K > 1$: I can no longer obtain a closed-form solution, but I can still express $x$ analytically as a function of all the $\lambda_i$. So, to compute a given multiplier $\lambda_i$, I resort to an iterative scheme: I fix all the other multipliers ($\lambda_j$ for all $j \neq i$) and solve for $\lambda_i$, then repeat this process cyclically over the remaining multipliers.
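A minimal sketch of this cyclic scheme, assuming (for illustration only) the quadratic objective $f(x) = \tfrac{1}{2}\|x\|^2$ and linear constraints $g_i(x) = a_i^T x$, so that the single-multiplier update has a closed form. The function name and toy data are mine, not part of the problem above:

```python
import numpy as np

def cyclic_dual_ascent(A, alpha, n_sweeps=200):
    """Cyclic single-multiplier updates for
        minimize 1/2 ||x||^2  s.t.  a_i^T x - alpha_i <= 0,
    where A stacks the a_i as rows. Stationarity of the Lagrangian
    gives x = -sum_i lambda_i a_i; each step maximizes the dual in
    one lambda_i exactly while all other multipliers stay fixed.
    """
    K, N = A.shape
    lam = np.zeros(K)
    for _ in range(n_sweeps):
        for i in range(K):
            # contribution of the *other* multipliers: sum_{j != i} lambda_j a_j
            r = A.T @ lam - lam[i] * A[i]
            # exact maximizer in lambda_i, projected onto lambda_i >= 0
            lam[i] = max(0.0, (-A[i] @ r - alpha[i]) / (A[i] @ A[i]))
    x = -A.T @ lam  # recover the primal point from stationarity
    return x, lam

# toy instance in R^2: constraints x_1 <= -1 and x_2 <= -2
A = np.array([[1.0, 0.0], [0.0, 1.0]])
alpha = np.array([-1.0, -2.0])
x, lam = cyclic_dual_ascent(A, alpha)
```

On this toy instance the rows of $A$ are orthogonal, so a single sweep already reaches the optimum $x = (-1, -2)$ with $\lambda = (1, 2)$; in general the sweeps only converge in the limit.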
- Is there a standard name for such a scheme that computes the Lagrange multipliers cyclically?
- If not, can I say that this cyclic/iterative scheme is simply Coordinate Descent (or something like it)?
Thank you in advance for your time.
optimization convex-optimization numerical-optimization
Depending on how you select $\lambda$, this may be coordinate descent in the dual problem.
– LinAlg
Nov 18 at 13:53
thank you LinAlg!
– user550103
Nov 18 at 15:51
asked Nov 18 at 8:12, edited Nov 18 at 8:27 · user550103