# Durbin Watson Statistic Definition

### What is the Durbin Watson statistic?

The Durbin Watson (DW) statistic is a test for autocorrelation in the residuals from a statistical regression analysis. The Durbin-Watson statistic will always have a value between 0 and 4. A value of 2.0 means that no autocorrelation is detected in the sample. Values from 0 to less than 2 indicate positive autocorrelation, and values from more than 2 to 4 indicate negative autocorrelation.

A stock price with positive autocorrelation would indicate that yesterday’s price has a positive correlation with today’s price. So if the stock fell yesterday, it is also likely to fall today. A stock with negative autocorrelation, on the other hand, has a negative influence on itself over time, so if it fell yesterday, it is more likely to rise today.

### Key points to remember

• The Durbin Watson statistic is a test for autocorrelation in the residuals of a regression on a data set.
• The DW statistic always has a value between zero and 4.0.
• A value of 2.0 means that no autocorrelation is detected in the sample. Values from zero to 2.0 indicate positive autocorrelation and values from 2.0 to 4.0 indicate negative autocorrelation.
• Autocorrelation can be useful in technical analysis, which focuses on trends in security prices using charting techniques rather than on a company’s financial health or management.

### The basics of the Durbin Watson statistic

Autocorrelation, also known as serial correlation, can be a significant problem in analyzing historical data if you don’t know to check for it. For example, since stock prices tend not to change too drastically from one day to the next, prices from one day to the next could be highly correlated, even though there is little useful information in that observation. To avoid autocorrelation problems, the simplest solution in finance is to convert a series of historical prices into a series of day-to-day percentage price changes.
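The conversion described above can be sketched in a few lines of Python. The function name and the sample prices below are my own, chosen purely for illustration:

```python
# A minimal sketch of converting a series of hypothetical closing
# prices into day-to-day percentage changes, the simple fix for
# price-series autocorrelation described above.

def to_pct_changes(prices):
    """Return the percentage change from each day to the next."""
    return [
        (curr - prev) / prev * 100
        for prev, curr in zip(prices, prices[1:])
    ]

# Hypothetical closing prices, for illustration only.
prices = [100.0, 101.0, 100.5, 102.0]
changes = to_pct_changes(prices)
print([round(c, 2) for c in changes])  # three daily percentage moves
```

Unlike the raw price levels, these percentage changes are free to vary around zero from one day to the next, which is why the transformed series is far less prone to spurious serial correlation.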

Autocorrelation can be useful in technical analysis, which is most interested in trends in, and relationships between, security prices, using charting techniques rather than a company’s financial health or management. Technical analysts can use autocorrelation to see how much influence past prices of a security have on its future price.

The Durbin Watson statistic is named after statisticians James Durbin and Geoffrey Watson.

Autocorrelation can indicate whether there is a momentum factor associated with a stock. For example, if you know that a stock historically has a high positive autocorrelation value and you have seen the stock make strong gains over the past few days, then you can reasonably expect its movements over the next few days (the leading time series) to match those of the lagging time series and to move upward.

### Example of the Durbin Watson statistic

The formula for the Durbin Watson statistic is quite complex, but it involves the residuals of an ordinary least squares (OLS) regression on a data set. The following example shows how to calculate this statistic.

Suppose the following data points (x, y):

$$
\begin{aligned}
\text{Pair One} &= (10, 1{,}100) \\
\text{Pair Two} &= (20, 1{,}200) \\
\text{Pair Three} &= (35, 985) \\
\text{Pair Four} &= (40, 750) \\
\text{Pair Five} &= (50, 1{,}215) \\
\text{Pair Six} &= (45, 1{,}000)
\end{aligned}
$$

Using least squares regression to find the “line of best fit,” the equation of that line for these data is:

$$Y = -2.6268x + 1{,}129.2$$
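The slope and intercept above can be checked with the closed-form OLS formulas for a single predictor. This is a sketch in plain Python; the variable names are my own, and the (x, y) pairs are the six from the example:

```python
# Verify the example's best-fit line using the closed-form ordinary
# least squares formulas for simple (one-predictor) regression.

xs = [10, 20, 35, 40, 50, 45]
ys = [1100, 1200, 985, 750, 1215, 1000]

n = len(xs)
sum_x, sum_y = sum(xs), sum(ys)
sum_xy = sum(x * y for x, y in zip(xs, ys))
sum_xx = sum(x * x for x in xs)

# slope = (n * Sxy - Sx * Sy) / (n * Sxx - Sx^2); intercept from the means.
slope = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
intercept = (sum_y - slope * sum_x) / n

print(round(slope, 4), round(intercept, 1))  # -2.6268 1129.2
```

Rounded to the precision shown in the article, this reproduces the stated coefficients exactly.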

The first step in calculating the Durbin Watson statistic is to calculate the expected “y” values using the line of best fit equation. For this data set, the expected “y” values are:

$$
\begin{aligned}
\text{Expected } Y(1) &= (-2.6268 \times 10) + 1{,}129.2 = 1{,}102.9 \\
\text{Expected } Y(2) &= (-2.6268 \times 20) + 1{,}129.2 = 1{,}076.7 \\
\text{Expected } Y(3) &= (-2.6268 \times 35) + 1{,}129.2 = 1{,}037.3 \\
\text{Expected } Y(4) &= (-2.6268 \times 40) + 1{,}129.2 = 1{,}024.1 \\
\text{Expected } Y(5) &= (-2.6268 \times 50) + 1{,}129.2 = 997.9 \\
\text{Expected } Y(6) &= (-2.6268 \times 45) + 1{,}129.2 = 1{,}011.0
\end{aligned}
$$

Then, the differences between the actual “y” values and the expected “y” values (the errors) are calculated:

$$
\begin{aligned}
\text{Error}(1) &= 1{,}100 - 1{,}102.9 = -2.9 \\
\text{Error}(2) &= 1{,}200 - 1{,}076.7 = 123.3 \\
\text{Error}(3) &= 985 - 1{,}037.3 = -52.3 \\
\text{Error}(4) &= 750 - 1{,}024.1 = -274.1 \\
\text{Error}(5) &= 1{,}215 - 997.9 = 217.1 \\
\text{Error}(6) &= 1{,}000 - 1{,}011.0 = -11.0
\end{aligned}
$$

These errors are then squared and summed:

$$
\begin{aligned}
\text{Sum of squared errors} &= (-2.9)^2 + 123.3^2 + (-52.3)^2 \\
&\quad + (-274.1)^2 + 217.1^2 + (-11.0)^2 \\
&= 140{,}330.81
\end{aligned}
$$

Next, the difference between each error and the previous error is calculated and squared:

$$
\begin{aligned}
\text{Difference}(1) &= 123.3 - (-2.9) = 126.2 \\
\text{Difference}(2) &= -52.3 - 123.3 = -175.6 \\
\text{Difference}(3) &= -274.1 - (-52.3) = -221.9 \\
\text{Difference}(4) &= 217.1 - (-274.1) = 491.3 \\
\text{Difference}(5) &= -11.0 - 217.1 = -228.1 \\
\text{Sum of squared differences} &= 389{,}406.71
\end{aligned}
$$

Finally, the Durbin Watson statistic is the quotient of the sum of squared differences over the sum of squared errors:

$$\text{Durbin Watson} = 389{,}406.71 / 140{,}330.81 = 2.77$$
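The whole worked example can be reproduced in a short, self-contained Python sketch. It fits the line, computes the residuals (errors), and forms the Durbin Watson quotient; intermediate values are kept at full precision rather than rounded at each step as in the article, so they differ slightly from the displayed figures, but the final statistic still rounds to the same 2.77:

```python
# Full Durbin Watson calculation for the example data set:
# fit the OLS line, take residuals, then divide the sum of squared
# successive residual differences by the sum of squared residuals.

xs = [10, 20, 35, 40, 50, 45]
ys = [1100, 1200, 985, 750, 1215, 1000]

# Closed-form OLS slope and intercept (same line as the example).
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    n * sum(x * x for x in xs) - sum(xs) ** 2
)
intercept = (sum(ys) - slope * sum(xs)) / n

# Residuals (errors): actual y minus expected y.
residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]

# Numerator: sum of squared differences between successive residuals.
numerator = sum(
    (curr - prev) ** 2 for prev, curr in zip(residuals, residuals[1:])
)
# Denominator: sum of squared residuals.
denominator = sum(e ** 2 for e in residuals)

dw = numerator / denominator
print(round(dw, 2))  # 2.77
```

For real work, libraries such as statsmodels report this statistic automatically alongside regression output, but the calculation itself is exactly the ratio computed here.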

As a general rule, test statistic values between 1.5 and 2.5 are considered relatively normal. Any value outside this range may be cause for concern. The Durbin-Watson statistic, although reported by many regression analysis programs, is not applicable in every situation. For example, when lagged dependent variables are included among the explanatory variables, it is inappropriate to use this test.