Exam 2
Exercise 2. Consider the following optimization problem: minimize, over x in R^4, the function
f(x) = 2*x1^2 + x2^2 + 3*x3^2 + 2*x4^2 + x1*x3 + 2*x2*x3 + x3*x4 - 5*x1 - 4*x3 + 3*x4
clear global;
close all;
clear;
clc;
To determine convexity, we need to examine the Hessian matrix, i.e. the matrix of second partial derivatives of the objective function with respect to the variables. In point 1.2 below we verify that the Hessian matrix is positive definite, so the objective function is strongly convex; moreover, in any convex optimization problem the feasible region is a convex set.
By the Weierstrass Theorem, if the objective function is continuous and the feasible region is closed and bounded, then at least one global optimum exists.
In this problem we use Corollary 3: if the objective function is continuous and coercive and the feasible region is closed, then at least one global minimum exists. Coercivity means that f(x) tends to +infinity as ||x|| tends to infinity, which we check below by restricting f to the line x1 = x2 = x3 = x4 = x.
% Coercivity check along the line x1 = x2 = x3 = x4 = x (a quick one-variable restriction)
objective_function = @(x) 2*x^2 + x*x + x^2 + 2*x*x + 3*x^2 + x*x + 2*x^2 - 5*x - 4*x + 3*x;
syms x
limit = limit(objective_function(x), x, Inf)
Output
limit =
Inf
Since the objective function is coercive, there exists a global optimum.
We need to analyze the convexity of the objective function as a whole. A twice-differentiable function is convex if its Hessian matrix is positive semi-definite, and strongly convex if the Hessian is positive definite.
In this case the objective function can be written in quadratic form as f(x) = (1/2)*x'*Q*x + q'*x, with Q and q given below.
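As a quick cross-check (not part of the original script), the Hessian can also be derived symbolically from the expanded objective stated above; under that assumption it should coincide with the matrix Q built below.
% Symbolic Hessian cross-check (sketch; assumes the expanded objective above)
syms x1 x2 x3 x4
f_sym = 2*x1^2 + x2^2 + 3*x3^2 + 2*x4^2 + x1*x3 + 2*x2*x3 + x3*x4 - 5*x1 - 4*x3 + 3*x4;
H_sym = hessian(f_sym, [x1 x2 x3 x4])   % expected to match Q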
Since the objective is quadratic, its Hessian matrix is constant; we can build it directly as:
Q = [4 0 1 0
0 2 2 0
1 2 6 1
0 0 1 4];
fprintf('Eigen Values Objective Function:');
display(eig(Q))
fprintf('Determinant: %i', det(Q));
Output
Eigen Values Objective Function:
1.0608
3.5933
4.0000
7.3460
Determinant: 112
Since all the eigenvalues are strictly positive and the determinant of the Hessian matrix is positive, the Hessian matrix is positive definite; therefore the objective function is strongly convex and this is a convex optimization problem.
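An equivalent check (not in the original script) uses the Cholesky factorization, which succeeds exactly when the matrix is positive definite; the smallest eigenvalue also gives the strong convexity modulus.
% Cholesky-based positive definiteness check (sketch, not in the original script)
[~, p] = chol(Q);
is_positive_definite = (p == 0)   % p == 0 means chol succeeded, so Q is positive definite
mu = min(eig(Q))                  % strong convexity modulus, about 1.06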
Starting from the point (0, 0, 0, 0), how many iterations are needed?
The optimization problem is unconstrained and the objective function is the quadratic f(x) = (1/2)*x'*Q*x + q'*x, where Q is positive definite, as follows:
Pseudocode: gradient method with the steepest descent direction and exact line search
% Gradient method
Choose x0, tolerance = 1e-6, i = 0
g = Q*x0 + q
while (norm(g) > tolerance)
    [step direction] d = -g;
    [step size] alpha = -(g'*d)/(d'*Q*d);
    x = x + alpha*d;
    g = Q*x + q;
    i = i + 1;
end
q = [-5
0
-4
3];
x0 = [0
0
0
0];
tolerance = 1e-6;
% Starting Point
x = x0;
i = 0;
g = Q*x + q;
fprintf('Gradient Method with exact line search \n');
fprintf('i \t f(x) \t \t \t ||grad f(x)|| \n');
while (norm(g)>tolerance)
v = (1/2)*x'*Q*x + q'*x;
g = Q*x + q;
fprintf('%i \t %2.10f \t \t %2.10f \n',i,v,norm(g));
% Direction
d = -g;
% Step size
alpha = -(g'*d)/(d'*Q*d);
% New point
x = x + alpha*d;
% Next iteration
i = i+1;
end
Output
Gradient Method with exact line search
i f(x) ||grad f(x)||
0 0.0000000000 7.0710678119
1 -5.0403225806 2.4685851395
2 -5.5976694222 1.1368021920
3 -5.7883000924 0.9808455712
4 -5.8850831815 0.6005067114
5 -5.9375039954 0.5321261507
. . .
. . .
. . .
45 -6.0000000000 0.0000027326
46 -6.0000000000 0.0000016771
47 -6.0000000000 0.0000014863
48 -6.0000000000 0.0000009122
With the gradient method using exact line search, 48 iterations are needed to reach the optimal solution.
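As a cross-check (not part of the original script), the unique minimizer of this strictly convex quadratic solves the linear system Q*x = -q; its objective value should match the -6.0000000000 reached in the table above.
% Direct computation of the minimizer (sketch, not in the original script)
x_star = -Q\q;                                  % expected (1, -1, 1, -1)
f_star = (1/2)*x_star'*Q*x_star + q'*x_star     % expected -6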
Starting from the point (0, 0, 0, 0), how many iterations are needed? Write the point found by the method at each iteration. With this method, the search direction combines the gradient computed at the current iteration with the direction computed at the previous iteration.
The optimization problem is unconstrained and the objective function is a quadratic, where Q is positive definite, as follows:
Pseudocode: conjugate gradient method for quadratic functions with exact line search
% Conjugate gradient method
Choose x0, tolerance, i = 0
g = Q*x0 + q
while (norm(g) > tolerance)
    if i == 0
        d = -g;
    else
        beta = norm(g)^2 / norm(g_prev)^2;
        d = -g + beta*d_prev;
    end
    alpha = norm(g)^2 / (d'*Q*d);
    x = x + alpha*d;
    d_prev = d; g_prev = g;
    g = Q*x + q;
    i = i + 1;
end
clear x v g
x0 = [0
0
0
0];
x = x0;
i = 0;
v = (1/2)*x'*Q*x + q'*x;
g = Q*x + q;
fprintf("\n \n Conjugate Gradient with exact line search \n");
fprintf("i \t f(x) \t \t \t ||grad f(x)|| ");
while (norm(g)>tolerance)
fprintf('%i \t %2.10f \f %2.10f \n', i, v, norm(g));
if i == 0
% Direction on the first iteration
d = -g;
else
beta = (norm(g)^2) / (norm(g_prev)^2);
% Direction
d = -g + beta*d_prev;
end
% Step Size
alpha = (norm(g)^2) / (d'*Q*d);
x = x + alpha*d;
d_prev = d;
g_prev = g;
v = (1/2)*x'*Q*x + q'*x;
g = Q*x + q;
i = i + 1;
end
Output
Conjugate Gradient with exact line search
i f(x) ||grad f(x)||
0 0.0000000000 7.0710678119
1 -5.0403225806 2.4685851395
2 -5.6669616780 0.9277888981
3 -5.9998774891 0.0296673867
In this case we can easily see that convergence is much faster than with the previous method: with the conjugate gradient method using exact line search, only 3 iterations are needed to find the minimum.
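This difference in speed is consistent with the theory: steepest descent with exact line search converges linearly at a rate governed by the condition number of Q, while the conjugate gradient method terminates in at most n = 4 iterations on a quadratic with a 4x4 positive definite Q. A minimal sketch (not in the original script) of the relevant quantities:
% Condition number of Q and the classical worst-case contraction factor
% of steepest descent with exact line search (sketch, not in the original script)
lambda = eig(Q);
kappa = max(lambda)/min(lambda)        % about 6.93
rate = ((kappa - 1)/(kappa + 1))^2     % about 0.56 reduction of f(x) - f* per iteration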
Starting from the point (0, 0, 0, 0), how many iterations are needed? The objective function is strongly convex because its Hessian is positive definite, and we obtain global convergence because the Newton direction is a descent direction.
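As a small sanity check (not part of the original script), the Newton direction d = -H\g satisfies d'*g = -g'*inv(H)*g < 0 whenever H is positive definite and g is nonzero, which is exactly what makes it a descent direction. For example, at the starting point:
% Descent-direction check at x0 (sketch, not in the original script)
x_check = zeros(4,1);
g_check = Q*x_check + q;
d_check = -Q\g_check;
slope = d_check'*g_check   % expected negative (here -12), so d is a descent direction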
Pseudocode: Newton method, basic version
% Newton method
Choose x0, i = 0
g = gradient at x0, H = Hessian at x0
while (norm(g) > tolerance)
    [search direction] d = -H\g;
    x = x + d;
    update g and H at the new x
    i = i + 1;
end
x0 = [0
0
0
0];
i = 0;
x = x0;
v = (1/2)*x'*Q*x + q'*x;
g = Q*x + q;
fprintf("\n \n Newton method basic version \n");
fprintf("i \t f(x) \t \t \t ||grad f(x)|| \n");
while (norm(g) > tolerance)
fprintf("%i \t %2.10f \t \t %2.10f \n", i, v, norm(g));
d = -Q\g;
x = x + d;
i = i + 1;
v = (1/2)*x'*Q*x + q'*x;
g = Q*x + q;
end
Output
Newton method basic version
i f(x) ||grad f(x)||
0 0.0000000000 7.0710678119
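The table stops at the initial point because, for a quadratic objective whose Hessian equals Q, a single Newton step lands exactly on the minimizer: x - Q\(Q*x + q) = -Q\q. A minimal check (not part of the original script):
% One Newton step from x0 reaches the exact minimizer (sketch, not in the original script)
x_newton = x0 - Q\(Q*x0 + q);
norm(Q*x_newton + q)   % expected 0 up to round-off, so the method stops after one iteration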
Pseudocode: Newton method with Armijo inexact line search
% Newton method, Armijo inexact line search
Choose x0, i = 0
Set alpha, gamma, tbar
g = gradient at x0, H = Hessian at x0
while (norm(g) > tolerance)
    [search direction] d = -H\g;
    t = tbar;
    while f(x + t*d) > f(x) + alpha*t*d'*g
        t = gamma*t;
    end
    x = x + t*d;
    update g and H at the new x
    i = i + 1;
end
clear x v g H
x0 = [0
0
0
0];
i = 0;
x = x0;
tbar = 1;
alpha = 0.1;
gamma = 0.9;
[v, g, H] = f(x, Q, q);
fprintf("\n \n Newton method Armijo inexact line search \n");
fprintf("i \t f(x) \t \t \t ||grad f(x)|| \n");
fprintf("%i \t %2.10f \t \t %2.10f \n", i, v, norm(g));
while (norm(g) > tolerance)
% search direction
d = -H\g;
t = tbar;
% Armijo inexact line search
while f(x+t*d, Q,q) > v+alpha*t*d'*g
t = gamma * t;
end
x = x + t*d;
i = i + 1;
[v,g,H] = f(x, Q, q);
end
function [v, g, H] = f(x, Q, q)
v = (1/2)*x'*Q*x + q'*x;
g = Q*x + q;
H = Q;
end
Output
Newton method Armijo inexact line search
i f(x) ||grad f(x)||
0 0.0000000000 7.0710678119
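As with the basic Newton method, only the initial point is printed: with tbar = 1 the full Newton step already satisfies the Armijo condition, so t = 1 is accepted immediately and the quadratic is minimized in a single iteration. A minimal check at x0 (sketch, not part of the original script; the variable names are only for illustration):
% Armijo test for the full Newton step at x0 (sketch, not in the original script)
[v0, g0, H0] = f(x0, Q, q);
d0 = -H0\g0;
lhs = f(x0 + d0, Q, q);       % objective at the trial point, expected -6
rhs = v0 + 0.1*1*(d0'*g0);    % Armijo threshold with alpha = 0.1, t = 1, expected -1.2
accepted = (lhs <= rhs)       % expected true, so the full step is accepted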