I've got the following exercise:
In this exercise we are going to use a very simple model of the earth (land only) on which grass grows. The net rate of change in the fraction of the area of the earth covered by grass (A) is given by dA/dt = A((1-A)G - D), where D is the death rate (a constant of 0.1 per 10 million years) and G is the growth rate of the grass (0.4 per 10 million years). Discretise this equation. Use the discretised equation to calculate A as a function of time. Every time step in the program corresponds to a period of 10 million years. Run the model for 200 time units (i.e. 2 billion years). Use a starting value for A of 0.001. Write to the screen the time at which the growth stabilizes (here defined as the point at which the change over one timestep becomes smaller than 1% of the difference between A at the time under consideration and the initial value of A).
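If I discretise it with a forward Euler step of one time unit (10 million years per step, so dt = 1 in those units), I get, as far as I can tell: A_new = A + A*((1-A)*G - D).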
Now I have this script:
clear all
%Define variables
D=0.1;      %death rate per 10 million years
G=0.4;      %growth rate per 10 million years
A=0.001;    %starting value of the grass fraction
dt=10E6;    %timestep in years (10 million years)
timevector=[];
grassvector=[];
startloop=1;
endloop=200;
%Run the discretised model for 200 timesteps
for t=startloop:endloop
    A = A + A*((1-A)*G - D);   %forward Euler update (dt = 1 time unit)
    grassvector(t) = A;
    timevector(t) = t*dt;      %time in years
end
plot(timevector, grassvector)
So far, it seems to work fine. But I can't figure out the second part of the question. I thought it could be done with a while loop, but MATLAB keeps giving me errors.
clear all
D=0.1;
G=0.4;
A=0.001;
dt=10E6;
t=0;
timevector=[];
grassvector=[];
while A(t+1)-A(t) > 0.01(A(t)-A)
t=(t+1)*dt;
A=A.*((((1-A).*G)-D)) + A;
grassvector(t)=A;
timevector(t)=t*dt;
end
Can someone help? Thanks!
With t=0, the first pass through the condition tries to index A(0), which is not valid in MATLAB (indices start at 1). Also, there is no * in 0.01(A(t)-A), so that expression is not valid MATLAB syntax. And so on.
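For what it's worth, here is a minimal sketch of one way the second part could be written, reusing the parameters from above; I keep A in a growing vector so the initial value stays available, and the names A0, Anew and change are just illustrative:
clear all
D=0.1;                  %death rate per 10 million years
G=0.4;                  %growth rate per 10 million years
dt=10E6;                %timestep in years (10 million years)
A0=0.001;               %starting value of A
grassvector=A0;         %grassvector(n) holds A after n-1 timesteps
t=1;                    %MATLAB indices start at 1, not 0
change=Inf;             %change over the most recent timestep
while change >= 0.01*(grassvector(t)-A0)
    Anew = grassvector(t) + grassvector(t)*((1-grassvector(t))*G - D);
    change = Anew - grassvector(t);
    t = t+1;
    grassvector(t) = Anew;
end
plot((0:t-1)*dt, grassvector)                          %same plot as before
fprintf('Stabilizes after %d timesteps (%g years)\n', t-1, (t-1)*dt)
With these numbers the grass fraction levels off near A = 1 - D/G = 0.75, so the change per step keeps shrinking and the loop should terminate well within the 200 steps used in the first script.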