
Monday, January 23, 2006 

Time Series Forecasting - Part 1

This is Part 1 in a series on time series forecasting. The full series is Part 1, Part 2, and Part 3.

Time series forecasting is supported in the Oracle Database by the Oracle OLAP FORECAST command and by Oracle Data Mining (ODM). The FORECAST command can be used to forecast data by one of three methods: straight-line trend, exponential growth, or Holt-Winters extrapolation. FORECAST performs the calculation according to the selected method and optionally stores the result in a variable in your analytic workspace. The first two methods are simple extrapolation techniques. The Holt-Winters forecasting method is more sophisticated. It is a type of exponential smoothing or moving average technique. The Holt-Winters method constructs three statistically related series, which are used to make the actual forecast (the standard recursions are sketched after the list below). These series are:

  1. The smoothed data series, which is the original data with seasonal effects and random error removed.
  2. The seasonal index series, which is the seasonal effect for each period. A value greater than one represents a seasonal increase in the data for that period, and a value less than one represents a seasonal decrease.
  3. The trend series, which is the change in the data for each period with the seasonal effects and random error removed.
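
For reference, here is a sketch of the standard multiplicative Holt-Winters recursions behind these three series, where L is the season length and alpha, beta, and gamma are smoothing constants in (0,1); the exact parameterization used by FORECAST may differ:

    S_t = alpha * (Y_t / I_{t-L}) + (1 - alpha) * (S_{t-1} + T_{t-1})    (smoothed series)
    T_t = beta * (S_t - S_{t-1}) + (1 - beta) * T_{t-1}                  (trend)
    I_t = gamma * (Y_t / S_t) + (1 - gamma) * I_{t-L}                    (seasonal index)
    F_{t+m} = (S_t + m * T_t) * I_{t-L+m}                                (m-step-ahead forecast)
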
The methods supported by the FORECAST command are "univariate time series" methods. That is, they can only be used to model a time series that consists of single (scalar) observations recorded sequentially over equal time increments. The FORECAST command methods are also linear in nature and cannot capture complex relationships between inputs and outputs.

ODM, through its Support Vector Machine (SVM) regression functionality, provides a powerful non-linear technique for time series forecasting that can include other variables besides the series itself and can capture complex relationships. The rest of this post covers the data mining approach to time series modeling. In the next two posts I will give examples of time series forecasting using ODM and the approach described below.

Data Mining Approach

ODM SVM regression supports modeling of time series via a time delay or lag space approach. This approach is also called "state-space reconstruction" in the physics community and "tapped delay line" in the engineering community. In its simplest form, past values of the target (the time series we want to forecast) are used as inputs (predictors) to the model. These inputs are called lagged variables and can be easily computed using the SQL LAG analytic function. Other attributes that are also relevant for forecasting the series can be added in the same fashion. Suppose we are trying to forecast the maximum daily electrical load based on electrical load values and average daily temperatures. Following the above approach, for a given date, we could use, for example, the load values and the average daily temperatures for the previous two days as inputs. This is illustrated in the table below (a query sketch follows the table). The data shows maximum load values (Y) and average daily temperatures (X) for 10 days.

DAY    Y   LAG(Y,1)  LAG(Y,2)     X   LAG(X,1)  LAG(X,2)
  1  797         .         .  -7.6         .         .
  2  777       797         .  -6.3      -7.6         .
  3  797       777       797  -3.0      -6.3      -7.6
  4  757       797       777   0.7      -3.0      -6.3
  5  707       757       797  -1.9       0.7      -3.0
  6  730       707       757  -6.0      -1.9       0.7
  7  818       730       707  -6.2      -6.0      -1.9
  8  818       818       730  -3.9      -6.2      -6.0
  9  803       818       818  -6.3      -3.9      -6.2
 10  804       803       818  -1.1      -6.3      -3.9
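
The lagged columns above can be generated directly in SQL. Here is a minimal sketch, assuming a hypothetical table LOAD_DATA with columns DAY, Y, and X holding the raw series:

    SELECT day,
           y,
           LAG(y, 1) OVER (ORDER BY day) AS lag_y_1,  -- load one day back
           LAG(y, 2) OVER (ORDER BY day) AS lag_y_2,  -- load two days back
           x,
           LAG(x, 1) OVER (ORDER BY day) AS lag_x_1,  -- temperature one day back
           LAG(x, 2) OVER (ORDER BY day) AS lag_x_2   -- temperature two days back
      FROM load_data
     ORDER BY day;

The NULLs returned by LAG for the first rows correspond to the dots in the table; those rows are typically dropped before training.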

In some cases, the values for auxiliary attributes (X), like the average temperature in the example above, would not be known at the time we are trying to forecast the target (Y) and would therefore not be included among the inputs. However, we would still be able to use the lagged values of X. Once the attributes have been selected, we can train the SVM model with these target (Y) and predictor attributes; in the example above the predictors are LAG(Y,1), LAG(Y,2), X, LAG(X,1), and LAG(X,2). The data would be split into training and test data sets. Usually one would train on earlier dates and test on later ones. Alternatively, for one-step-ahead forecast testing (more on this below), the training and test data sets can be randomly selected from all available data. An SVM regression model whose only inputs are lagged target (Y) values is called an "autoregressive model." The input space that includes all of the lagged variables is called the "embedding space."
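
Training itself can be done through the DBMS_DATA_MINING PL/SQL API. Here is a minimal sketch, assuming a hypothetical view LOAD_LAGGED_TRAIN that holds the DAY case identifier, the target Y, and only the lagged predictor columns chosen above:

    -- settings table selecting the SVM algorithm for the regression function
    CREATE TABLE svm_settings (
      setting_name  VARCHAR2(30),
      setting_value VARCHAR2(4000));

    BEGIN
      INSERT INTO svm_settings VALUES
        (DBMS_DATA_MINING.ALGO_NAME,
         DBMS_DATA_MINING.ALGO_SUPPORT_VECTOR_MACHINES);

      DBMS_DATA_MINING.CREATE_MODEL(
        model_name          => 'SVM_LOAD_MODEL',    -- hypothetical model name
        mining_function     => DBMS_DATA_MINING.REGRESSION,
        data_table_name     => 'LOAD_LAGGED_TRAIN', -- training split only
        case_id_column_name => 'DAY',
        target_column_name  => 'Y',
        settings_table_name => 'SVM_SETTINGS');
    END;
    /

ODM treats every column other than the case identifier and the target as a predictor, so the view should expose only the attributes selected for the model.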

Things get a bit more complicated if the data rows in the time series are not equally spaced, that is, the time interval between observations is not the same. One approach is to use a smoothing technique to compute values for the attributes at equally spaced time intervals, and then use the interpolated values for training instead of the original data.
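
Here is a minimal sketch of the re-gridding step, assuming a hypothetical table RAW_SERIES with an OBS_DATE column (truncated to day) and a Y value column, irregularly spaced. For simplicity it fills the daily grid by carrying the last observation forward; a real application might substitute linear interpolation or another smoother:

    WITH grid AS (
      -- one row per day between the first and last observation
      SELECT d.first_day + LEVEL - 1 AS day_id
        FROM (SELECT MIN(obs_date) AS first_day,
                     MAX(obs_date) AS last_day
                FROM raw_series) d
      CONNECT BY LEVEL <= d.last_day - d.first_day + 1
    )
    SELECT g.day_id,
           LAST_VALUE(s.y IGNORE NULLS)
             OVER (ORDER BY g.day_id) AS y_filled  -- last observation carried forward
      FROM grid g
      LEFT OUTER JOIN raw_series s ON s.obs_date = g.day_id
     ORDER BY g.day_id;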

Methodology

When modeling time series, following the above approach, it is necessary to make decisions regarding:
  1. Trend removal
  2. Target transformation
  3. Lagged attribute selection
These decisions are required for the majority of time series forecasting techniques.

Trend Removal

A key assumption behind the above time delay approach is that the time series is stationary, that is, the statistical distribution of the series values is the same at all time intervals. In particular, a stationary series has no trend. In practice, many time series exhibit a trend, a tendency for the values to drift up or down over time; many financial indicators, such as stock prices, usually go up over time. The simplest way to remove a trend is differencing, the standard statistical method for handling nondeterministic (stochastic) trends. In this case, instead of using Y (the time series value) as the target, we use the difference D = Y-LAG(Y,1). The same applies to the lagged target values. For example, instead of using LAG(Y,1) as a predictor we would use LAG(Y,1)-LAG(Y,2). Sometimes it is necessary to compute differences of differences. At apply time the differencing of the target can be reversed to obtain forecasts for the original series.
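
Here is a sketch of first differencing with the same LAG function, reusing the hypothetical LOAD_DATA table from the example above:

    SELECT day,
           y - LAG(y, 1) OVER (ORDER BY day) AS d,        -- differenced target D
           LAG(y, 1) OVER (ORDER BY day)
             - LAG(y, 2) OVER (ORDER BY day) AS lag_d_1,  -- LAG(D,1)
           LAG(y, 2) OVER (ORDER BY day)
             - LAG(y, 3) OVER (ORDER BY day) AS lag_d_2   -- LAG(D,2)
      FROM load_data
     ORDER BY day;

To reverse the differencing at apply time, add the predicted difference back to the last known value: the forecast for Y at day t is the value of Y at day t-1 plus the predicted D for day t.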

Target Transformation
It is usually useful to normalize the target for SVM regression, as this speeds up the algorithm's convergence. For time series problems, the target should be normalized before the lagged variables are created, so that the target and all of its lags share the same scale.
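
Here is a minimal min-max normalization sketch, again against the hypothetical LOAD_DATA table; the lagged variables would then be computed from Y_NORM rather than Y:

    SELECT day,
           (y - min_y) / (max_y - min_y) AS y_norm  -- scale the target to [0,1]
      FROM (SELECT day, y,
                   MIN(y) OVER () AS min_y,
                   MAX(y) OVER () AS max_y
              FROM load_data);

In a careful setup the normalization constants would be computed on the training split only and kept so that predictions can be mapped back to the original scale.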

Lagged Attribute Selection
You can select the lags either by analyzing the data (computing the correlogram and cross-correlations) or by choosing a window size. For example, if we use a window of size 2 we would include LAG(Y,1) and LAG(Y,2) as predictors, where Y is the target attribute. Some care is needed in choosing the window size, as it directly affects the pattern recognition capability of the SVM algorithm: it limits the size of the patterns that can be recognized. If the window is too small we might not have enough information to capture the dynamics of the system underlying the time series data, and different patterns may look the same because only a small fraction of each pattern is revealed by the lagged attributes. If the window is too large, the extra lagged attributes will add noise and make the problem harder to solve.
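
A rough data-driven check is to compute sample autocorrelations with the CORR aggregate over the lagged columns; lags highly correlated with the target are natural candidates. A sketch, using the hypothetical LOAD_DATA table:

    WITH lagged AS (
      SELECT y,
             LAG(y, 1) OVER (ORDER BY day) AS y1,
             LAG(y, 2) OVER (ORDER BY day) AS y2,
             LAG(y, 3) OVER (ORDER BY day) AS y3
        FROM load_data
    )
    SELECT CORR(y, y1) AS ac_lag1,  -- sample autocorrelation at lag 1
           CORR(y, y2) AS ac_lag2,
           CORR(y, y3) AS ac_lag3
      FROM lagged;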

Computing Forecasts

There are several different ways to compute forecasts. The two most commonly used strategies are one-step-ahead (open-loop) forecasting and multi-step (closed-loop) forecasting.

Single-Step or Open-Loop Forecasting
This strategy requires all the input values to the model to be available. If the previous value of the target is included in the model, we can only make forecasts for the next time interval, hence the name single-step. For the load forecast example above, we would only be able to forecast the demand for one day in the future (Day 11). In order to compute Y_12, the forecast for Day 12, we would need to wait until the actual values for Day 11 were available. In other words, let's say that we have trained an SVM regression model with target Y and inputs LAG(Y,1) and LAG(Y,2). Let the output (predicted value) computed by the model be designated by P(.,.). Then:
  • forecast Y_11 as P(Y_10,Y_9)
  • forecast Y_12 as P(Y_11,Y_10)
  • and so on
Single-step forecasts can be computed directly using Oracle Data Miner's Apply and Test mining activities or the SQL PREDICTION function. This is illustrated in Part 2 of this series.
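
As a preview, here is a minimal sketch using the SQL PREDICTION function, assuming the hypothetical SVM_LOAD_MODEL from above and a view LOAD_LAGGED that exposes the lagged columns for the day being forecast:

    SELECT day,
           PREDICTION(svm_load_model
                      USING lag_y_1, lag_y_2, x, lag_x_1, lag_x_2) AS y_forecast
      FROM load_lagged
     WHERE day = 11;  -- one step past the last known day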

Multi-step or Closed-Loop Forecasting
This strategy uses actual values for the inputs when available, and estimates or predicted values when the actual values are not available. Let's say that we have trained an SVM regression model with target Y and inputs LAG(Y,1) and LAG(Y,2). Again, let the output (predicted value) computed by the model be designated by P(.,.). Then:
  • forecast Y_11 as P_11 = P(Y_10,Y_9)
  • forecast Y_12 as P_12 = P(P_11,Y_10)
  • and so on
In this case we can generate a forecast for Y_12 even if we do not have the actual value for Y_11. The strategy is to use the predictions for previous time steps as estimates for the missing inputs.

Multi-step forecasts can be computed using a simple PL/SQL procedure. This is illustrated in Part 3 of this series.
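
As a preview, here is a minimal PL/SQL sketch of the closed-loop idea, assuming a purely autoregressive variant of the hypothetical SVM_LOAD_MODEL whose only inputs are LAG_Y_1 and LAG_Y_2; each prediction is fed back as the lag-1 input for the next step:

    -- enable server output to see the printed forecasts
    DECLARE
      v_lag1 NUMBER := 804;  -- actual Y for Day 10
      v_lag2 NUMBER := 803;  -- actual Y for Day 9
      v_pred NUMBER;
    BEGIN
      FOR step IN 1 .. 5 LOOP  -- forecast Days 11 through 15
        SELECT PREDICTION(svm_load_model
                          USING v_lag1 AS lag_y_1, v_lag2 AS lag_y_2)
          INTO v_pred
          FROM dual;
        DBMS_OUTPUT.PUT_LINE('Day ' || (10 + step) || ' forecast: ' || v_pred);
        v_lag2 := v_lag1;  -- shift the lag window
        v_lag1 := v_pred;  -- the prediction becomes the new lag-1 input
      END LOOP;
    END;
    /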

Comparison with Traditional Time Series Techniques

SVM regression offers the same benefits for time series forecasting as feedforward neural networks, but with simpler training. The advantages of using such models include:
  • The ability to model very complex functions
  • The ability to use a large number of variables in the model and to include other data (e.g., fundamental and technical factors) in addition to lagged time series data
The methodology described here creates non-linear autoregressive models. These models are very powerful and have been used in areas such as financial forecasting, electric load forecasting, chaos modeling, and sunspot prediction. In contrast, ARIMA, a popular time series forecasting technique, supports models with both autoregressive and moving average components. However, ARIMA models are linear, while SVM regression models can capture non-linear relationships.
