# Introduction to the Autoregressive Integrated Moving Average (ARIMA) model

A time series is a sequence of data recorded over regular time intervals: hourly, daily, monthly, quarterly, or yearly. Analysing time series data reveals meaningful characteristics of the dataset and helps predict future movements from past observations. With growing concern about uncertainties in the market, the need for forecasting trends has increased. One market where an accurate model is essential for tracking future trends is the stock market. Because stock price data arrives in huge volumes and changes daily, investors constantly seek the best way to forecast trends so as to maximise profit and minimise risk. Among the various fundamental and technical means of examining the stock market, the Autoregressive Integrated Moving Average (ARIMA) model is a statistical tool with a standard, relatively simple structure that provides useful information about the market. Focusing on this contribution, this article discusses the forecasting capability of the **ARIMA** model.

## Concept of Autoregressive Integrated Moving Average (ARIMA) model

In 1970, Box and Jenkins introduced the **Autoregressive Integrated Moving Average (ARIMA)** model as a methodology for identifying, estimating, and diagnosing models of time-series data.

The **ARIMA** model can be used in different fields, such as weather or sales prediction, but financial forecasting is the field where it yields its most prominent results. By representing the future value of a variable as a linear combination of past values and past errors, predictions based on the **ARIMA** model have outperformed those of complex structural models (Adebiyi et al., 2014). The general model can be represented as

y_{t} = ϕ_{0} + ϕ_{1}y_{t−1} + ϕ_{2}y_{t−2} + ⋯ + ϕ_{p}y_{t−p} + ε_{t} − ϴ_{1}ε_{t−1} − ϴ_{2}ε_{t−2} − ⋯ − ϴ_{q}ε_{t−q}

Wherein ε_{t} represents the error term, y_{t} is the actual value, ϕ and ϴ are the coefficients, and p and q are the autoregressive and moving average orders respectively.

**ARIMA**, the autoregressive integrated moving average, consists of three main components:

### Autoregressive (AR or p)

The autoregressive component states the relationship of a variable with its own lagged values; the autoregressive order p represents the number of lagged values connected to the current value of the variable.

y_{t} = ϕ_{1}y_{t−1} + … + ϕ_{p}y_{t−p} + e_{t} **OR** y_{t} = c + ϕ_{1}y_{t−1} + … + ϕ_{p}y_{t−p} + e_{t}

Where y_{t} is the current value, y_{t−1} is the lagged value of the variable y, e_{t} is the error term, c is the constant or drift, and p is the number of period lags.
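
To make the recursion concrete, here is a minimal pure-Python sketch that simulates an AR(p) process; the coefficient values and series length are illustrative assumptions, not taken from the article:

```python
import random

def simulate_ar(phi, c, n, seed=0):
    """Simulate y_t = c + phi_1*y_{t-1} + ... + phi_p*y_{t-p} + e_t."""
    random.seed(seed)
    p = len(phi)
    y = [0.0] * p                                    # starting lagged values
    for _ in range(n):
        e_t = random.gauss(0, 1)                     # white-noise error term
        y.append(c + sum(phi[i] * y[-1 - i] for i in range(p)) + e_t)
    return y[p:]

# AR(2) with phi_1 = 0.6, phi_2 = 0.2 (inside the stationarity bound)
series = simulate_ar([0.6, 0.2], c=1.0, n=200)
# long-run mean of a stationary AR(p) is c / (1 - sum(phi)) = 1 / 0.2 = 5
print(sum(series) / len(series))
```

Because the coefficients sum to less than 1, the simulated series fluctuates around a fixed long-run mean rather than wandering off.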

### Integrated (I or d)

The integration order states the degree of differencing required to convert a non-stationary time series into a stationary one by removing the effects of trend, seasonality, or irregular events.
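
As a sketch of what differencing does (pure Python, synthetic data): one round of first differencing turns a linear trend, which is non-stationary in level, into a constant and hence stationary series.

```python
def difference(y, d=1):
    """Apply d rounds of first differencing: Δy_t = y_t − y_{t−1}."""
    for _ in range(d):
        y = [y[i] - y[i - 1] for i in range(1, len(y))]
    return y

trend = [2 * t + 3 for t in range(10)]   # linear trend: non-stationary level
print(difference(trend, d=1))            # constant series: [2, 2, 2, 2, 2, 2, 2, 2, 2]
```

A quadratic trend would need d = 2, which is why the rules later in the article treat the order of differencing as a parameter to tune rather than a fixed choice.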

### Moving average (MA or q)

The moving average component represents the relationship between an observation and the residual errors of past observations; the moving average order q indicates how many past errors the current value depends on.

y_{t} = e_{t} + ϴ_{1}e_{t−1} + … + ϴ_{q}e_{t−q} **OR** y_{t} = α + e_{t} + ϴ_{1}e_{t−1} + … + ϴ_{q}e_{t−q}

Where y_{t} is the current value, e_{t} is the residual term, q is the moving average order, and α is the constant term.
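
A matching pure-Python sketch of an MA(q) process; the theta value and series length are illustrative assumptions:

```python
import random

def simulate_ma(theta, alpha, n, seed=1):
    """Simulate y_t = alpha + e_t + theta_1*e_{t-1} + ... + theta_q*e_{t-q}."""
    random.seed(seed)
    q = len(theta)
    e = [random.gauss(0, 1) for _ in range(n + q)]   # white-noise errors
    return [alpha + e[t] + sum(theta[j] * e[t - 1 - j] for j in range(q))
            for t in range(q, n + q)]

# MA(1) with theta_1 = 0.8: each value depends only on the current
# and the immediately preceding error, so shocks fade after one period
series = simulate_ma([0.8], alpha=0.0, n=500)
print(sum(series) / len(series))   # hovers near alpha = 0
```

Unlike the AR sketch, where a shock echoes through all later values, here a shock influences only the next q observations.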

## Why use **Autoregressive Integrated Moving Average (ARIMA)** for forecasting?

The **ARIMA** model combines integration with the autoregressive and moving average components: 'AR' captures the change since the last period, 'MA' smoothens the trend in the data, and 'I' removes the non-stationarity of the series. Since the optimal model can be derived by changing the number of lags in each component, **ARIMA** offers a more rigorous statistical approach to prediction than methods like linear regression or exponential smoothing. Time series are dynamic in nature, so the selected model should be flexible and adjustable to the data; **ARIMA**, with its flexibility and smoothness, captures the different behaviours of the data in one model.

## Assumptions for the applicability of the **ARIMA** model

In order to fit the **ARIMA** model for future predictions, a time series should satisfy the assumptions stated below (Subhasree, 2018):

- The time series used for the analysis should be stationary, i.e. the statistical properties of the series (its mean, variance, and autocorrelation) should not depend on the time at which it is observed. A white noise series, or a series exhibiting only cyclical behaviour with no trend, can be considered stationary.
- The data should be univariate. Since ARIMA predicts a variable from its own past values, a single-variable series should be used for framing the model.
- Bound of stationarity: the absolute value of each ϕ in the AR part should be less than 1 (−1 < ϕ < 1). If the bound of stationarity does not hold, the series is not autoregressive and could be either trending or drifting (McCleary & Hay, 1980).
- Bound of invertibility: the absolute value of each ϴ in the MA part should be less than 1 (−1 < ϴ < 1), as a value outside this limit would lead to a non-stationary series (McCleary & Hay, 1980).
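
The stationarity bound can be illustrated with the impulse response of an AR(1) process: a one-off shock dies out when |ϕ| < 1 but grows without bound otherwise (a minimal sketch; the ϕ values are illustrative):

```python
def impulse_response(phi, steps):
    """Trace a one-off unit shock through an AR(1): y_t = phi * y_{t-1}."""
    y, path = 1.0, []
    for _ in range(steps):
        path.append(y)
        y = phi * y
    return path

print(impulse_response(0.5, 5))   # [1.0, 0.5, 0.25, 0.125, 0.0625] -- shock dies out
print(impulse_response(1.2, 5))   # shock keeps growing: the series is non-stationary
```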

## Rules for identifying the **Autoregressive Integrated Moving Average (ARIMA)** model

For building a robust **Autoregressive Integrated Moving Average (ARIMA)** model, it is essential to identify the optimal number of lags, the order of differencing, and the moving average size. The rules stated below should be followed to identify the optimal order (Nau, 2014).

- A model with no differencing normally includes a constant term, a model with one order of differencing includes a constant term only if the series has a non-zero average trend, and a model with two orders of differencing does not include a constant term.
- A model with no differencing assumes that the original series is stationary, a model with one order of differencing assumes a constant average trend, and a model with two orders of differencing assumes a time-varying trend.
- The optimal order of differencing is often the one where the standard deviation of the series is the lowest.
- A series whose autocorrelations are positive out to a high number of lags (i.e. 10 or more) needs a higher order of differencing for an optimal model.
- If the lag-1 autocorrelation is zero or negative, no higher-order differencing is required, while an autocorrelation below −0.5 suggests the series may be over-differenced.
- If the partial autocorrelation function (PACF) shows a sharp cut-off or the lag-1 autocorrelation is positive, the series is under-differenced and one or more AR terms should be added.
- If the autocorrelation function (ACF) shows a sharp cut-off or the lag-1 autocorrelation is negative, the series is over-differenced and one or more MA terms should be added.
- As AR and MA terms can cancel out each other's effects, a mixed AR-MA model should be tried with fewer AR and MA terms.
- If the sum of the AR coefficients is almost 1, reduce the number of AR terms by 1 and increase the order of differencing by 1.
- If the sum of the MA coefficients is almost 1, reduce the number of MA terms by 1 and reduce the order of differencing by 1.
- Unstable or erratic long-term forecasts signal the existence of a unit root (a coefficient sum of about 1) in the AR or MA part of the model.
- An order of seasonal differencing should be used if the series has a consistent and strong seasonal pattern.
- If the autocorrelation of the appropriately differenced series is positive at lag s (the number of periods in a season), add a seasonal AR term to the model; if it is negative at lag s, add a seasonal MA term.
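
Several of these rules rest on reading the sample autocorrelation function. A minimal pure-Python sketch of how it can be computed (the trend series is an illustrative example):

```python
def acf(y, max_lag):
    """Sample autocorrelation of y at lags 1..max_lag."""
    n = len(y)
    mean = sum(y) / n
    c0 = sum((v - mean) ** 2 for v in y)             # lag-0 sum of squares
    return [sum((y[t] - mean) * (y[t - k] - mean) for t in range(k, n)) / c0
            for k in range(1, max_lag + 1)]

# A trending series keeps large positive autocorrelations at many lags,
# which by the rules above signals that differencing is needed
trend = [float(t) for t in range(50)]
print(acf(trend, 3))   # all values strongly positive
```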

## Steps for building the **Autoregressive Integrated Moving Average (ARIMA)** model

There are certain basic steps used for fitting the **Autoregressive Integrated Moving Average (ARIMA)** model to a time series (McCleary & Hay, 1980):

### Step 1: Plotting the data

Initially, the variable is plotted against time in order to inspect the features of the graph, identify unusual observations, and determine whether the series is stationary or seasonal.

### Step 2: Transforming the data

Once the characteristics of the series are determined, the data is transformed; for example, a natural log transformation is applied in order to stabilise the standard deviation of the series.
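
A small sketch of why the log transformation helps (synthetic data with multiplicative growth; the growth rate is an illustrative assumption):

```python
import math

# sales growing multiplicatively: the level (and spread) rises over time
sales = [100 * 1.1 ** t for t in range(8)]
log_sales = [math.log(v) for v in sales]

# after the log, successive differences are a constant log(1.1) per step,
# i.e. the transformed series has a stable linear structure
diffs = [round(log_sales[t] - log_sales[t - 1], 4) for t in range(1, len(log_sales))]
print(diffs)   # every entry equals round(log(1.1), 4) = 0.0953
```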

### Step 3: Identifying the orders of the model and estimating model

If the series is stabilised by transformation, the **Autoregressive Integrated Moving Average (ARIMA)** model is fitted directly; otherwise, the orders of AR, I, and MA are identified first. Akaike's Information Criterion (AIC) or Schwarz's Bayesian Information Criterion (SBIC) can be used to determine the optimal and most effective number of parameters (i.e. p + q). The time series is plotted using differencing, the correlogram, and the partial correlogram to determine the orders. Based on the number of lags and orders identified, the **ARIMA** model is estimated.
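
A hedged sketch of order selection by AIC: fit AR(p) models of increasing order by ordinary least squares and keep the order with the lowest AIC. This is pure Python on simulated data; the fitting routine and simulated coefficients are illustrative assumptions, not a full ARIMA estimator (MA terms and differencing are omitted for brevity).

```python
import math
import random

def fit_ar_ols(y, p):
    """Least-squares fit of an AR(p) with intercept; returns (coefficients, RSS)."""
    rows = [[y[t - 1 - i] for i in range(p)] + [1.0] for t in range(p, len(y))]
    targets = y[p:]
    k = p + 1
    # normal equations (X'X) b = X'y, solved by Gaussian elimination with pivoting
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * v for r, v in zip(rows, targets)) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            xtx[r] = [a - f * b for a, b in zip(xtx[r], xtx[col])]
            xty[r] -= f * xty[col]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (xty[r] - sum(xtx[r][j] * b[j] for j in range(r + 1, k))) / xtx[r][r]
    rss = sum((v - sum(c * x for c, x in zip(b, row))) ** 2
              for row, v in zip(rows, targets))
    return b, rss

def aic(rss, n, n_params):
    """Gaussian AIC: n * ln(RSS / n) + 2 * (n_params + 1), +1 for the error variance."""
    return n * math.log(rss / n) + 2 * (n_params + 1)

# simulate an AR(2) series with phi = (0.5, 0.3), c = 1 (illustrative values)
random.seed(42)
y = [0.0, 0.0]
for _ in range(300):
    y.append(1.0 + 0.5 * y[-1] + 0.3 * y[-2] + random.gauss(0, 1))

scores = {p: aic(fit_ar_ols(y, p)[1], len(y) - p, p + 1) for p in (1, 2, 3)}
best = min(scores, key=scores.get)
print(best)   # AIC usually picks the true order p = 2 (p = 3 can win by chance)
```

In practice this grid search is done by statistical software over p, d, and q jointly; the sketch only shows the mechanics of comparing candidate orders by a single criterion.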

### Step 4: Residual Diagnostic

The validity of the model is assessed using graphs (a Q-Q plot or histogram of the residuals), test statistics, and the ACF or PACF of the residuals. If the model fits poorly, Steps 1-3 are repeated; otherwise, predictions are made with the model.
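
One way to sketch the residual diagnostic in code: check whether every sample autocorrelation of the residuals lies inside the approximate 95% white-noise band ±1.96/√n. This is a crude illustration, not a substitute for a formal test such as Ljung-Box; the simulated residuals are an assumption for the demo.

```python
import math
import random

def acf(y, max_lag):
    """Sample autocorrelation of y at lags 1..max_lag."""
    n = len(y)
    m = sum(y) / n
    c0 = sum((v - m) ** 2 for v in y)
    return [sum((y[t] - m) * (y[t - k] - m) for t in range(k, n)) / c0
            for k in range(1, max_lag + 1)]

def looks_like_white_noise(residuals, max_lag=10):
    """True if every sample autocorrelation sits inside the rough 95% band."""
    bound = 1.96 / math.sqrt(len(residuals))
    return all(abs(r) < bound for r in acf(residuals, max_lag))

# residuals of a good model should behave like pure noise...
random.seed(7)
noise = [random.gauss(0, 1) for _ in range(400)]
print(looks_like_white_noise(noise))        # usually True for genuine noise

# ...while leftover structure (here, an undifferenced trend) clearly fails
print(looks_like_white_noise([float(t) for t in range(100)]))   # False
```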

## Characteristics of a good forecasting model

Forecasting depends on the model used for the prediction, so in order to have an effective prediction, it is essential to formulate a good forecasting model. Stated below are the characteristics that a good model should possess:

- The model should fit the past data well and the adjusted R² value should be high.
- The Mean Absolute Percentage Error (MAPE) should be low.
- The Relative Standard Error (RSE) of the selected model should be low compared to other models.
- The plot of the actual series should fit well with the predicted observations.
- The model should forecast well even on withheld (out-of-sample) data.
- No significant patterns should be left in the PACF and ACF of the residuals.
- It should be effective but simple, without too many coefficients (the model should be parsimonious).
- The estimated coefficients of the model should be statistically significant and not redundant.
- The residuals should be white noise.
- The model should be invertible and stationary.
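
As a small sketch of the MAPE criterion mentioned above (the actual and predicted values are illustrative):

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent (requires non-zero actuals)."""
    return 100 / len(actual) * sum(abs((a - f) / a) for a, f in zip(actual, predicted))

actual    = [100.0, 110.0, 120.0, 130.0]
predicted = [ 98.0, 112.0, 118.0, 133.0]
print(round(mape(actual, predicted), 2))   # -> 1.95, i.e. forecasts off by ~2% on average
```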

#### References

- Adebiyi, A. A., Adewumi, A. O., & Ayo, C. K. (2014). Stock price prediction using the ARIMA model. *Proceedings – UKSim-AMSS 16th International Conference on Computer Modelling and Simulation, UKSim 2014*, 106–112. https://doi.org/10.1109/UKSim.2014.67
- McCleary, R., & Hay, R. A. (1980). Univariate ARIMA models. *Applied Time Series Analysis for the Social Sciences*, 1–12.
- Nau, R. (2014). *Rules for identifying ARIMA models*. https://people.duke.edu/~rnau/arimrule.htm
- Subhasree, C. (2018). Time series analysis using ARIMA model in R. In *Programming in R* (pp. 1–15). https://datascienceplus.com/time-series-analysis-using-arima-model-in-r/
