Question
Find a problem that can be solved using matrices and MATLAB that we have not explicitly discussed in this class. The problem can come from online or from a textbook, and it can be solved using basic linear systems or a more advanced technique (such as eigenvalues, singular value decomposition, the Gram-Schmidt algorithm, Jacobian matrices, Hessian matrices, etc.), but it should NOT have a MATLAB solution in the source that you consult; providing such a solution is your job. Give a one-page write-up suitable for an undergraduate audience explaining the problem and an overview of how to solve it, and give the MATLAB code. Please do not write about the PARTICLE SWARM OPTIMIZATION problem; give me another problem, or one from the MATHWORKS website.
Explanation / Answer
PARTICLE SWARM OPTIMIZATION (PSO):
1) PSO simulates the behavior of a flock of birds.
* Imagine the following scenario: a group of birds is randomly searching for food in an area, and there is only one piece of food in that area.
* None of the birds knows where the food is, but each bird knows how far away the food is at every iteration.
* What is the best strategy for finding the food? An effective one is to follow the bird that is currently nearest to the food.
2) PSO borrows this scenario to solve optimization problems.
* In PSO, every candidate solution is a "bird" in the search space; we call it a particle.
* Each particle has a fitness value, evaluated by the fitness function to be optimized, and a velocity that directs its flight.
* The particles fly through the problem space by following the current optimum particles.
3) PSO is initialized with a group of random particles (candidate solutions) and then searches for optima by updating generations.
* In every iteration, each particle is updated by following two "best" values.
* The first is the best solution (fitness) the particle itself has achieved so far; this value is called pbest (its fitness value is also stored).
* The other "best" value tracked by the optimizer is the best value obtained so far by any particle in the population. This is a global best, called gbest. When a particle takes only part of the population as its topological neighbors, the best value in that neighborhood is a local best, called lbest.
After finding the two best values, each particle updates its velocity and position as follows:

v[] = v[] + c1*rand()*(pbest[] - present[]) + c2*rand()*(gbest[] - present[])   (X)
present[] = present[] + v[]                                                     (Y)

* v[] is the particle velocity and present[] is the current particle (solution).
* pbest[] and gbest[] are defined as above. rand() is a random number in (0,1). c1 and c2 are learning factors; usually c1 = c2 = 2.
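To make equations (X) and (Y) concrete, here is one update step for a single one-dimensional particle, sketched in Python. The rand() draws are replaced by fixed values r1 and r2 so the arithmetic is reproducible; all the numbers are illustrative.

```python
# One velocity/position update for a 1-D particle.
c1 = c2 = 2.0                 # learning factors (the usual choice)
pbest, gbest = 3.0, 5.0       # personal-best and global-best positions
present, v = 4.0, 0.5         # current position and velocity
r1, r2 = 0.3, 0.6             # fixed stand-ins for the two rand() draws
v = v + c1*r1*(pbest - present) + c2*r2*(gbest - present)  # equation (X)
present = present + v                                      # equation (Y)
# v is now 0.5 - 0.6 + 1.2 = 1.1, so the particle moves to 5.1
```

Note how the pbest term pulls the particle back toward its own best position while the gbest term pulls it toward the swarm's best; here gbest dominates because it is farther away.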
The pseudocode of the procedure is as follows:

For each particle
    Initialize particle
End

Do
    For each particle
        Calculate fitness value
        If the fitness value is better than the best fitness value (pbest) in history
            Set the current value as the new pbest
        End
    End
    Choose the particle with the best fitness value of all the particles as the gbest
    For each particle
        Calculate particle velocity according to equation (X)
        Update particle position according to equation (Y)
    End
While maximum iterations or minimum error criterion is not attained
Particles' velocities on each dimension are clamped to a maximum velocity Vmax, a parameter specified by the user: if the sum of accelerations would cause the velocity on some dimension to exceed Vmax, the velocity on that dimension is limited to Vmax.
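The clamping rule can be sketched per dimension in a few lines of Python; the Vmax value in the usage below is illustrative:

```python
def clamp(v, vmax):
    # Limit one velocity component to the interval [-vmax, vmax].
    return max(-vmax, min(vmax, v))
```

For example, clamp(7.5, 2.0) returns 2.0 and clamp(-3.1, 2.0) returns -2.0, while a component already inside the interval passes through unchanged.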
CODE:

First, save the objective function as its own file, ofun.m:

function f=ofun(x)
% objective function (minimization)
of=10*(x(1)-1)^2+20*(x(2)-2)^2+30*(x(3)-3)^2;
% constraints (all constraints must be converted into <=0 form)
% if there are no constraints, comment out all c0 lines below
c0=[];
c0(1)=x(1)+x(2)+x(3)-5;   % <=0 type constraint
c0(2)=x(1)^2+2*x(2)-x(3); % <=0 type constraint
% flag each violated constraint
c=zeros(1,length(c0));
for i=1:length(c0)
    if c0(i)>0
        c(i)=1;
    end
end
penalty=10000;        % penalty on each constraint violation
f=of+penalty*sum(c);  % penalized fitness
end
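The step-penalty construction in ofun.m is language-independent; as an aside, here is a minimal Python sketch of the same idea (same objective, same two constraints, same penalty of 10000 per violation):

```python
def penalized(x, penalty=1e4):
    # Objective from ofun.m: quadratic bowl centered at (1, 2, 3).
    of = 10*(x[0]-1)**2 + 20*(x[1]-2)**2 + 30*(x[2]-3)**2
    # Constraints in <=0 form.
    g = [x[0] + x[1] + x[2] - 5,
         x[0]**2 + 2*x[1] - x[2]]
    # Add a flat penalty for each violated constraint.
    return of + penalty*sum(1 for gi in g if gi > 0)
```

At the feasible point (0, 0, 3) both constraints are satisfied, so penalized returns the plain objective value 90; at the unconstrained minimum (1, 2, 3) both constraints are violated and the value jumps to 20000, which is why the swarm is driven back into the feasible region.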
The main script goes in a second file (e.g. pso_main.m):

tic
clc
clear all
close all
rng default
LB=[0 0 0];    % lower bounds of variables
UB=[10 10 10]; % upper bounds of variables
% pso parameter values
m=3;         % number of variables
n=100;       % population size
wmax=0.9;    % maximum inertia weight
wmin=0.4;    % minimum inertia weight
c1=2;        % acceleration factor
c2=2;        % acceleration factor
maxite=1000; % maximum number of iterations (used in the while loop below)
% pso main program
maxrun=10;   % number of independent runs
for run=1:maxrun
run
% pso initialization
for i=1:n
for j=1:m
x0(i,j)=round(LB(j)+rand()*(UB(j)-LB(j)));
end
end
x=x0; % initial population
v=0.1*x0; % initial velocity
for i=1:n
f0(i,1)=ofun(x0(i,:));
end
[fmin0,index0]=min(f0);
pbest=x0; % initial pbest
gbest=x0(index0,:); % initial gbest
% pso initialization
% pso algorithm
ite=1;
tolerance=1;
while ite<=maxite && tolerance>10^-12
w=wmax-(wmax-wmin)*ite/maxite; % update inertial weight
% pso velocity updates
for i=1:n
for j=1:m
v(i,j)=w*v(i,j)+c1*rand()*(pbest(i,j)-x(i,j)) +c2*rand()*(gbest(1,j)-x(i,j));
end
end
% pso position update
for i=1:n
for j=1:m
x(i,j)=x(i,j)+v(i,j);
end
end
% handling boundary violations
for i=1:n
for j=1:m
if x(i,j)<LB(j)
x(i,j)=LB(j);
elseif x(i,j)>UB(j)
x(i,j)=UB(j);
end
end
end
% evaluating fitness
for i=1:n
f(i,1)=ofun(x(i,:));
end
% updating pbest and fitness
for i=1:n
if f(i,1)<f0(i,1)
pbest(i,:)=x(i,:);
f0(i,1)=f(i,1);
end
end
[fmin,index]=min(f0); % finding out the best particle
ffmin(ite,run)=fmin; % storing best fitness
ffite(run)=ite; % storing iteration count
% updating gbest and best fitness
if fmin<fmin0
gbest=pbest(index,:);
fmin0=fmin;
end
% calculating tolerance
if ite>100
tolerance=abs(ffmin(ite-100,run)-fmin0);
end
% displaying iterative results
if ite==1
    fprintf('Iteration  Best particle  Objective fun\n');
end
fprintf('%8g %8g %8.4f\n',ite,index,fmin0);
ite=ite+1;
end
% pso algorithm
gbest % best solution found in this run
fvalue=10*(gbest(1)-1)^2+20*(gbest(2)-2)^2+30*(gbest(3)-3)^2;
fff(run)=fvalue;
rgbest(run,:)=gbest;
disp('--------------------------------------');
end
% pso main program
disp(' ');
disp('Final Results');
[bestfun,bestrun]=min(fff)
best_variables=rgbest(bestrun,:)
disp(' ');
toc
% PSO convergence characteristic
plot(ffmin(1:ffite(bestrun),bestrun),'-k');
xlabel('Iteration');
ylabel('Fitness function value');
title('PSO convergence characteristic')
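For readers who want to sanity-check the algorithm outside MATLAB, here is a compact NumPy sketch of the same penalized PSO. Parameter values mirror the script above where they exist there; the swarm size, iteration count, seed, and velocity clamp used here are illustrative choices, not part of the original code.

```python
import numpy as np

def penalized(x, penalty=1e4):
    # Objective from ofun.m plus a step penalty per violated constraint.
    of = 10*(x[0]-1)**2 + 20*(x[1]-2)**2 + 30*(x[2]-3)**2
    g = [x[0] + x[1] + x[2] - 5, x[0]**2 + 2*x[1] - x[2]]
    return of + penalty*sum(1 for gi in g if gi > 0)

def pso(n=200, iters=600, c1=2.0, c2=2.0, wmax=0.9, wmin=0.4, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.zeros(3), 10.0*np.ones(3)
    vmax = 0.1*(ub - lb)                       # per-dimension velocity clamp
    x = lb + rng.random((n, 3))*(ub - lb)      # random initial positions
    v = np.zeros((n, 3))                       # initial velocities
    fp = np.array([penalized(p) for p in x])   # personal-best fitness
    pbest = x.copy()
    gi = int(np.argmin(fp))
    gbest, fg = pbest[gi].copy(), fp[gi]
    for it in range(iters):
        w = wmax - (wmax - wmin)*it/iters      # linearly decreasing inertia
        r1, r2 = rng.random((n, 3)), rng.random((n, 3))
        v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)   # equation (X)
        v = np.clip(v, -vmax, vmax)                       # Vmax clamping
        x = np.clip(x + v, lb, ub)             # equation (Y) + bound handling
        f = np.array([penalized(p) for p in x])
        better = f < fp
        pbest[better], fp[better] = x[better], f[better]
        gi = int(np.argmin(fp))
        if fp[gi] < fg:
            gbest, fg = pbest[gi].copy(), fp[gi]
    return gbest, fg
```

Because any infeasible point costs at least 10000, a final fitness below that threshold guarantees the returned solution satisfies both constraints, and a value below 90 means the swarm beat the easy feasible point (0, 0, 3).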