I have been reading about and coding the basics of the Autoregressive (AR) and Moving Average (MA) models.
First, I am very grateful to Claudio Bustos for all his patience in helping me understand this. There is still a lot to learn about these models, but I will keep bugging him. :D
We wrote the simulations for AR(p) and MA(q). The idea behind creating them is to first simulate the process with known coefficients, and then move on to writing an ARMA process which can also estimate the coefficients.
Autoregressive Model
This model is represented by AR(p), where p is the order of the model. For a pure autoregressive model, we take the order of the moving-average component as 0.
AR(p) is represented by the following equation:

x(t) = φ_1 x(t-1) + φ_2 x(t-2) + … + φ_p x(t-p) + ε(t)

Here, φ_1, …, φ_p are the parameters of the model, and ε(t) is the white-noise error term.
To simulate this model, we have to observe and keep track of the previous x(t) values, and compute the current value from the observed weighted sum plus the error noise.
Here is how the general AR(p) simulator works:

- It initially creates a buffer to prevent ‘broken values’, and trims that buffer before returning the result.
- It then creates the backshifts vector, which depends on the current iteration in (1..n).
- After creating the backshifts vector, it extracts the required phi values from the stack.
- It then performs the vector multiplication backshifts * parameters and sums the result.
- The current x(t) value is then returned with white noise added.
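Those steps can be sketched in Ruby roughly as follows. This is a minimal illustration, not the actual code from the branch; the ar_sim name, the Box-Muller noise generator, and the default buffer size are my own assumptions:

```ruby
# Illustrative AR(p) simulator sketch (hypothetical helper, not the
# statsample-timeseries implementation).
def ar_sim(n, phi, sd = 1.0, buffer = 10)
  order = phi.size
  series = []
  (n + buffer).times do |i|
    # white-noise term, drawn via the Box-Muller transform
    noise = sd * Math.sqrt(-2.0 * Math.log(1 - rand)) * Math.cos(2 * Math::PI * rand)
    # backshifts vector: the `order` most recent values (zeros while warming up)
    backshifts = (1..order).map { |k| i - k >= 0 ? series[i - k] : 0.0 }
    # weighted sum of backshifts and parameters, plus the noise
    series << backshifts.zip(phi).map { |x, f| x * f }.sum + noise
  end
  series.last(n) # trim the warm-up buffer
end

simulated = ar_sim(500, [0.6])
```

The warm-up buffer plays the role of the ‘broken values’ guard described above: the first few iterations see zero-padded backshifts, so they are generated and then discarded.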

There are certain tests which will now be performed in the context of acf and pacf, which I coded earlier. They form the basis of correctness for our autoregressive estimation. :) Expect the next post to be about those tests.
Moving Average Model
This model is represented by MA(q), where q is the order of this model. Again, for a pure moving-average model, we take p = 0.
This is represented by the following equation:

x(t) = ε(t) + θ_1 ε(t-1) + θ_2 ε(t-2) + … + θ_q ε(t-q)

where θ_1, …, θ_q are the parameters of the model, and ε(t) is again the white-noise term.
Unlike the autoregressive model, this model was somewhat harder to implement. It needs to observe the previous error-noise terms instead of the previous x(t) values, and the series depends largely on the order of the model.
Its code can also be found on my GitHub branch.
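A minimal sketch of the same idea in Ruby — again purely illustrative, not the branch’s actual implementation; the ma_sim name and the noise generator are my own assumptions:

```ruby
# Illustrative MA(q) simulator sketch (hypothetical helper).
def ma_sim(n, theta, sd = 1.0)
  q = theta.size
  errors = []
  series = []
  n.times do |i|
    # draw a fresh white-noise term (Box-Muller transform)
    errors << sd * Math.sqrt(-2.0 * Math.log(1 - rand)) * Math.cos(2 * Math::PI * rand)
    # x(t) = e(t) + theta_1*e(t-1) + ... + theta_q*e(t-q), zero-padded at the start
    lagged = (1..q).map { |k| i - k >= 0 ? errors[i - k] : 0.0 }
    series << errors[i] + lagged.zip(theta).map { |e, t| e * t }.sum
  end
  series
end

ma = ma_sim(300, [0.4, 0.3])
```

Note how the history that gets carried along is the errors array rather than the series itself — that is exactly the difference from the AR case described above.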
I am currently working on the analysis of those tests I mentioned, for various combinations of AR(p) and MA(q). Once they are done, I will move on to implementing a few algorithms, such as Yule-Walker and Burg’s, for estimation of the coefficients.
Cheers,
Ankur Goel