spreg.GM_Error

class spreg.GM_Error(y, x, w, vm=False, name_y=None, name_x=None, name_w=None, name_ds=None)[source]

GMM method for a spatial error model, with results and diagnostics; based on Kelejian and Prucha (1998, 1999) [KP98] [KP99].

Parameters
y : array

nx1 array for dependent variable

x : array

Two dimensional array with n rows and one column for each independent (exogenous) variable, excluding the constant

w : pysal W object

Spatial weights object (always needed)

vm : boolean

If True, include variance-covariance matrix in summary results

name_y : string

Name of dependent variable for use in output

name_x : list of strings

Names of independent variables for use in output

name_w : string

Name of weights matrix for use in output

name_ds : string

Name of dataset for use in output

Examples

We first need to import the needed modules, namely numpy to convert the data we read into arrays that spreg understands, libpysal to access the example data and spatial weights, and the GM_Error class itself to perform the analysis.

>>> import libpysal
>>> import numpy as np
>>> from spreg import GM_Error

Open data on Columbus neighborhood crime (49 areas) using libpysal.io.open(). This is the DBF associated with the Columbus shapefile. Note that libpysal.io.open() also reads data in CSV format; since the actual class requires data to be passed in as numpy arrays, the user can read their data in using any method.

>>> dbf = libpysal.io.open(libpysal.examples.get_path('columbus.dbf'),'r')
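Since the class only needs numpy arrays, any reader will do; for instance, a hypothetical alternative using pandas (the CSV file name here is assumed, and the lines are left as comments so they are not run) could look like:

>>> # import pandas as pd
>>> # df = pd.read_csv('columbus.csv')  # hypothetical CSV export of the same table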

Extract the HOVAL column (home values) from the DBF file and make it the dependent variable for the regression. Note that PySAL requires this to be a numpy array of shape (n, 1) as opposed to the also common shape of (n, ) that other packages accept.

>>> y = np.array([dbf.by_col('HOVAL')]).T
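As a quick sanity check (an addition to the original example), we can confirm that y has the expected (n, 1) shape for the 49 Columbus areas:

>>> y.shape
(49, 1)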

Extract INC (income) and CRIME (crime) vectors from the DBF to be used as independent variables in the regression. Note that PySAL requires this to be an nxj numpy array, where j is the number of independent variables (not including a constant). By default this class adds a vector of ones to the independent variables passed in.

>>> names_to_extract = ['INC', 'CRIME']
>>> x = np.array([dbf.by_col(name) for name in names_to_extract]).T
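Again as a quick check, x should be n by j, that is 49 by 2:

>>> x.shape
(49, 2)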

Since we want to run a spatial error model, we need to specify the spatial weights matrix that includes the spatial configuration of the observations in the error component of the model. To do that, we can open an already existing gal file or create a new one. In this case, we will use columbus.gal, which contains the contiguity relationships between the observations in the Columbus dataset we are using throughout this example. Note that, in order to read the file rather than just open it, we need to append '.read()' at the end of the command.

>>> w = libpysal.io.open(libpysal.examples.get_path("columbus.gal"), 'r').read() 
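The weights object should align with the data; its n attribute (standard on libpysal W objects) reports the number of observations, which should match the 49 rows in y:

>>> w.n
49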

Unless there is a good reason not to, the weights should be row-standardized so that every row of the matrix sums to one. Among other things, this allows us to interpret the spatial lag of a variable as the average value of the neighboring observations. In PySAL, this can easily be performed in the following way:

>>> w.transform='r'
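To see the effect, we can check that the weights of the first observation now sum to one (a check added here; it relies on the standard id_order and weights attributes of the W object):

>>> round(sum(w.weights[w.id_order[0]]), 4)
1.0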

We are all set with the preliminaries and ready to run the model. In this case, we will need the variables and the weights matrix. If we want the names of the variables printed in the output summary, we have to pass them in as well, although this is optional.

>>> model = GM_Error(y, x, w=w, name_y='hoval', name_x=['income', 'crime'], name_ds='columbus')

Once we have run the model, we can explore the output a little. The regression object we have created has many attributes, so take your time to discover them. Note that because we are running the classical GMM error model from 1998/99, the spatial parameter is obtained as a point estimate, so although you get a value for it (there are four coefficients in model.betas), you cannot perform inference on it (there are only three values in model.std_err).

>>> print(model.name_x)
['CONSTANT', 'income', 'crime', 'lambda']
>>> np.around(model.betas, decimals=4)
array([[ 47.6946],
       [  0.7105],
       [ -0.5505],
       [  0.3257]])
>>> np.around(model.std_err, decimals=4)
array([ 12.412 ,   0.5044,   0.1785])
>>> np.around(model.z_stat, decimals=6) 
array([[  3.84261100e+00,   1.22000000e-04],
       [  1.40839200e+00,   1.59015000e-01],
       [ -3.08424700e+00,   2.04100000e-03]])
>>> round(model.sig2,4)
198.5596
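The fitted object also stores predicted values and residuals as nx1 arrays; since the residuals are defined as u = y - predy, the two should add back up to the observed y (a consistency check added here):

>>> np.allclose(model.predy + model.u, y)
True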

Attributes

summary : string

Summary of regression results and diagnostics (note: use in conjunction with the print command)

betas : array

kx1 array of estimated coefficients

u : array

nx1 array of residuals

e_filtered : array

nx1 array of spatially filtered residuals

predy : array

nx1 array of predicted y values

n : integer

Number of observations

k : integer

Number of variables for which coefficients are estimated (including the constant)

y : array

nx1 array for dependent variable

x : array

Two dimensional array with n rows and one column for each independent (exogenous) variable, including the constant

mean_y : float

Mean of dependent variable

std_y : float

Standard deviation of dependent variable

pr2 : float

Pseudo R squared (squared correlation between y and ypred)

vm : array

Variance covariance matrix (kxk)

sig2 : float

Sigma squared used in computations

std_err : array

1xk array of standard errors of the betas

z_stat : list of tuples

z statistic; each tuple contains the pair (statistic, p-value), where each is a float

name_y : string

Name of dependent variable for use in output

name_x : list of strings

Names of independent variables for use in output

name_w : string

Name of weights matrix for use in output

name_ds : string

Name of dataset for use in output

title : string

Name of the regression method used
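To inspect all of these results at once, print the summary attribute (output omitted here for brevity; the doctest directive keeps the long report from being checked verbatim):

>>> print(model.summary)  # doctest: +SKIP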

__init__(self, y, x, w, vm=False, name_y=None, name_x=None, name_w=None, name_ds=None)[source]

Initialize self. See help(type(self)) for accurate signature.

Methods

__init__(self, y, x, w[, vm, name_y, …])

Initialize self.

Attributes

mean_y

std_y