spreg.GM_Combo_Hom_Regimes

class spreg.GM_Combo_Hom_Regimes(y, x, regimes, yend=None, q=None, w=None, w_lags=1, lag_q=True, cores=False, max_iter=1, epsilon=1e-05, A1='het', constant_regi='many', cols2regi='all', regime_err_sep=False, regime_lag_sep=False, vm=False, name_y=None, name_x=None, name_yend=None, name_q=None, name_w=None, name_ds=None, name_regimes=None)[source]

GMM method for a spatial lag and error model with homoskedasticity, regimes and endogenous variables, with results and diagnostics; based on Drukker et al. (2013) [DEP13], following Anselin (2011) [Ans11].

Parameters
y : array

nx1 array for dependent variable

x : array

Two dimensional array with n rows and one column for each independent (exogenous) variable, excluding the constant

yend : array

Two dimensional array with n rows and one column for each endogenous variable

q : array

Two dimensional array with n rows and one column for each external exogenous variable to use as instruments (note: this should not contain any variables from x)

regimes : list

List of n values with the mapping of each observation to a regime. Assumed to be aligned with ‘x’.

w : pysal W object

Spatial weights object (always needed)

constant_regi : string

Switcher controlling the constant term setup. It may take the following values:

  • ‘one’: a vector of ones is appended to x and held constant across regimes.

  • ‘many’: a vector of ones is appended to x and considered different per regime (default).

cols2regi : list, ‘all’

Argument indicating whether each column of x should be considered as different per regime (True) or held constant across regimes (False). If a list, it must contain k booleans, one per variable (True if the coefficient varies by regime, False if it is held constant). If ‘all’ (default), all the variables vary by regime. See the sketch below for an example combining this option with constant_regi.
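
As a minimal sketch of how these two switches combine (the data preparation mirrors the Examples section below; the particular choice of constant_regi='one' together with a mixed cols2regi list is purely illustrative and not part of the original documentation):

>>> import numpy as np
>>> import libpysal
>>> from spreg import GM_Combo_Hom_Regimes
>>> db = libpysal.io.open(libpysal.examples.get_path("NAT.dbf"), 'r')
>>> y = np.array([db.by_col('HR90')]).T
>>> x = np.array([db.by_col(name) for name in ['PS90', 'UE90']]).T
>>> regimes = db.by_col('SOUTH')
>>> w = libpysal.weights.Rook.from_shapefile(libpysal.examples.get_path("NAT.shp"))
>>> w.transform = 'r'
>>> # One global constant; PS90 gets one coefficient per regime, UE90 a single global coefficient
>>> reg_mix = GM_Combo_Hom_Regimes(y, x, regimes, w=w, A1='hom_sc', constant_regi='one', cols2regi=[True, False])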

regime_err_sep : boolean

If True, a separate regression is run for each regime.

regime_lag_sep : boolean

If True, the spatial parameter for spatial lag is also computed according to different regimes. If False (default), the spatial parameter is fixed across regimes.

w_lags : integer

Orders of W to include as instruments for the spatially lagged dependent variable. For example, if w_lags=1, then the instruments are WX; if w_lags=2, then WX, WWX; and so on.

lag_q : boolean

If True, then include spatial lags of the additional instruments (q).
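
Continuing the sketch above (reusing its y, x, regimes and w; the w_lags=2 setting here is illustrative only and not part of the original documentation), higher-order spatial lags of the exogenous variables are added to the instrument set:

>>> # Instruments for the spatial lag of the dependent variable now include WX and WWX;
>>> # because lag_q=True, spatial lags of any additional instruments q would be included too
>>> reg_wlags = GM_Combo_Hom_Regimes(y, x, regimes, w=w, w_lags=2, lag_q=True, A1='hom_sc')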

max_iter : int

Maximum number of iterations of steps 2a and 2b from [ADKP10]. Note: epsilon provides an additional stop condition.

epsilon : float

Minimum change in lambda required to stop iterations of steps 2a and 2b from [ADKP10]. Note: max_iter provides an additional stop condition.

A1 : string

If A1=’het’, then the matrix A1 is defined as in [ADKP10]. If A1=’hom’, then as in [Ans11]. If A1=’hom_sc’, then as in [DEP13] and [DPR13].

vm : boolean

If True, include variance-covariance matrix in summary results

cores : boolean

Specifies whether multiprocessing is to be used. Default: no multiprocessing (cores=False). Note: multiprocessing may not work on all platforms.

name_y : string

Name of dependent variable for use in output

name_x : list of strings

Names of independent variables for use in output

name_yend : list of strings

Names of endogenous variables for use in output

name_q : list of strings

Names of instruments for use in output

name_w : string

Name of weights matrix for use in output

name_ds : string

Name of dataset for use in output

name_regimes : string

Name of regime variable for use in the output

Examples

We first need to import the needed modules, namely numpy to convert the data we read into arrays that spreg understands, libpysal to read in the data and build the spatial weights, and the GM_Combo_Hom_Regimes class itself from spreg.

>>> import numpy as np
>>> import libpysal
>>> from spreg import GM_Combo_Hom_Regimes

Open data on NCOVR US County Homicides (3085 areas) using libpysal.io.open(). This is the DBF associated with the NAT shapefile. Note that libpysal.io.open() also reads data in CSV format; since the actual class requires data to be passed in as numpy arrays, the user can read their data in using any method.

>>> db = libpysal.io.open(libpysal.examples.get_path("NAT.dbf"),'r')

Extract the HR90 column (homicide rates in 1990) from the DBF file and make it the dependent variable for the regression. Note that PySAL requires this to be a numpy array of shape (n, 1) as opposed to the also common shape of (n, ) that other packages accept.

>>> y_var = 'HR90'
>>> y = np.array([db.by_col(y_var)]).reshape(3085,1)

Extract UE90 (unemployment rate) and PS90 (population structure) vectors from the DBF to be used as independent variables in the regression. Other variables can be inserted by adding their names to x_var, such as x_var = ['Var1', 'Var2', ...]. Note that PySAL requires this to be an nxj numpy array, where j is the number of independent variables (not including a constant). By default this model adds a vector of ones to the independent variables passed in.

>>> x_var = ['PS90','UE90']
>>> x = np.array([db.by_col(name) for name in x_var]).T

The different regimes in this data are given according to the North and South dummy (SOUTH).

>>> r_var = 'SOUTH'
>>> regimes = db.by_col(r_var)

Since we want to run a spatial combo model, we need to specify the spatial weights matrix that includes the spatial configuration of the observations. To do that, we can open an already existing gal file or create a new one. In this case, we will create one from NAT.shp.

>>> w = libpysal.weights.Rook.from_shapefile(libpysal.examples.get_path("NAT.shp"))

Unless there is a good reason not to do it, the weights have to be row-standardized so every row of the matrix sums to one. Among other things, this allows us to interpret the spatial lag of a variable as the average value of the neighboring observations. In PySAL, this can be easily performed in the following way:

>>> w.transform = 'r'

We are all set with the preliminaries, so we are ready to run the model. In this case, we will need the variables and the weights matrix. If we want the names of the variables printed in the output summary, we also have to pass them in, although this is optional.

Example only with spatial lag

The Combo class runs a SARAR model, that is, a spatial lag+error model. In this case we will run a simple version of that, where we have the spatial effects as well as exogenous variables. Since it is a spatial model, we have to pass in the weights matrix. We can obtain a summary of the output by typing print(reg.summary); alternatively, we can check the betas:

>>> reg = GM_Combo_Hom_Regimes(y, x, regimes, w=w, A1='hom_sc', name_y=y_var, name_x=x_var, name_regimes=r_var, name_ds='NAT')
>>> print(reg.name_z)
['0_CONSTANT', '0_PS90', '0_UE90', '1_CONSTANT', '1_PS90', '1_UE90', '_Global_W_HR90', 'lambda']
>>> print(np.around(reg.betas,4))
[[ 1.4607]
 [ 0.9579]
 [ 0.5658]
 [ 9.1129]
 [ 1.1339]
 [ 0.6517]
 [-0.4583]
 [ 0.6634]]

This class also allows the user to run a spatial lag+error model with the extra feature of including non-spatial endogenous regressors. This means that, in addition to the spatial lag and error, we consider some of the variables on the right-hand side of the equation as endogenous and we instrument for them. In this case we consider RD90 (resource deprivation) as an endogenous regressor and use FP89 (families below poverty) as its instrument, hence passing it in through the instruments parameter, ‘q’.

>>> yd_var = ['RD90']
>>> yd = np.array([db.by_col(name) for name in yd_var]).T
>>> q_var = ['FP89']
>>> q = np.array([db.by_col(name) for name in q_var]).T

And then we can run and explore the model analogously to the previous combo:

>>> reg = GM_Combo_Hom_Regimes(y, x, regimes, yd, q, w=w, A1='hom_sc', name_y=y_var, name_x=x_var, name_yend=yd_var, name_q=q_var, name_regimes=r_var, name_ds='NAT')
>>> print(reg.name_z)
['0_CONSTANT', '0_PS90', '0_UE90', '1_CONSTANT', '1_PS90', '1_UE90', '0_RD90', '1_RD90', '_Global_W_HR90', 'lambda']
>>> print(reg.betas)
[[ 3.4196478 ]
 [ 1.04065595]
 [ 0.16630304]
 [ 8.86570777]
 [ 1.85134286]
 [-0.24921597]
 [ 2.43007651]
 [ 3.61656899]
 [ 0.03315061]
 [ 0.22636055]]
>>> print(np.sqrt(reg.vm.diagonal()))
[ 0.53989913  0.13506086  0.06143434  0.77049956  0.18089997  0.07246848
  0.29218837  0.25378655  0.06184801  0.06323236]
>>> print('lambda: ', np.around(reg.betas[-1], 4))
lambda:  [ 0.2264]
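
Other quantities documented in the Attributes section below are available on the fitted object in the same way. A brief sketch (output omitted, since it is not part of the original example):

>>> pr2 = reg.pr2                    # pseudo R squared: squared correlation between y and predy
>>> stats, pvals = zip(*reg.z_stat)  # z statistics and p-values, one (statistic, p-value) pair per coefficient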

Attributes
summary : string

Summary of regression results and diagnostics (note: use in conjunction with the print() function)
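
For example, with the reg object fitted in the Examples section above, the full table can be displayed as follows (output omitted here):

>>> print(reg.summary)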

betas : array

kx1 array of estimated coefficients

u : array

nx1 array of residuals

e_filtered : array

nx1 array of spatially filtered residuals

e_pred : array

nx1 array of residuals (using reduced form)

predy : array

nx1 array of predicted y values

predy_e : array

nx1 array of predicted y values (using reduced form)

n : integer

Number of observations

k : integer

Number of variables for which coefficients are estimated (including the constant). Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)

y : array

nx1 array for dependent variable

x : array

Two dimensional array with n rows and one column for each independent (exogenous) variable, including the constant. Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)

yend : array

Two dimensional array with n rows and one column for each endogenous variable. Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)

q : array

Two dimensional array with n rows and one column for each external exogenous variable used as instruments. Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)

z : array

nxk array of variables (combination of x and yend). Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)

h : array

nxl array of instruments (combination of x and q). Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)

iter_stop : string

Stop criterion reached during iteration of steps 2a and 2b from [ADKP10]. Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)

iteration : integer

Number of iterations of steps 2a and 2b from [ADKP10]. Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)

mean_y : float

Mean of dependent variable

std_y : float

Standard deviation of dependent variable

vm : array

Variance covariance matrix (kxk)

pr2 : float

Pseudo R squared (squared correlation between y and ypred). Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)

pr2_e : float

Pseudo R squared (squared correlation between y and ypred_e (using reduced form)). Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)

sig2 : float

Sigma squared used in computations (based on filtered residuals). Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)

std_err : array

1xk array of standard errors of the betas. Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)

z_stat : list of tuples

z statistic; each tuple contains the pair (statistic, p-value), where each is a float. Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)

name_y : string

Name of dependent variable for use in output

name_x : list of strings

Names of independent variables for use in output

name_yend : list of strings

Names of endogenous variables for use in output

name_z : list of strings

Names of exogenous and endogenous variables for use in output

name_q : list of strings

Names of external instruments

name_h : list of strings

Names of all instruments used in output

name_w : string

Name of weights matrix for use in output

name_ds : string

Name of dataset for use in output

name_regimes : string

Name of regimes variable for use in output

title : string

Name of the regression method used. Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)

regimes : list

List of n values with the mapping of each observation to a regime. Assumed to be aligned with ‘x’.

constant_regi : string

Ignored if regimes=False. Constant option for regimes. Switcher controlling the constant term setup. It may take the following values:

  • ‘one’: a vector of ones is appended to x and held constant across regimes.

  • ‘many’: a vector of ones is appended to x and considered different per regime (default).

cols2regi : list, ‘all’

Ignored if regimes=False. Argument indicating whether each column of x should be considered as different per regime (True) or held constant across regimes (False). If a list, it must contain k booleans, one per variable (True if the coefficient varies by regime, False if it is held constant). If ‘all’, all the variables vary by regime.

regime_err_sep : boolean

If True, a separate regression is run for each regime.

regime_lag_sep : boolean

If True, the spatial parameter for spatial lag is also computed according to different regimes. If False (default), the spatial parameter is fixed across regimes.

kr : int

Number of variables/columns to be “regimized” or subject to change by regime. These will result in one parameter estimate by regime for each variable (i.e. nr parameters per variable)

kf : int

Number of variables/columns to be considered fixed or global across regimes and hence only obtain one parameter estimate

nr : int

Number of different regimes in the ‘regimes’ list

multi : dictionary


Only available when multiple regressions are estimated, i.e. when regime_err_sep=True and no variable is fixed across regimes. Contains all attributes of each individual regression
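
A hedged sketch of how this dictionary could be inspected; reg_sep is a hypothetical GM_Combo_Hom_Regimes object estimated with separate regressions per regime (see regime_err_sep above), and the keys are assumed to be the regime values, e.g. 0 and 1 for the SOUTH dummy used in the Examples:

>>> # Hypothetical: reg_sep was fitted with separate regressions per regime,
>>> # so multi maps each regime value to its own regression object
>>> for r in sorted(reg_sep.multi.keys()):
...     print(r, reg_sep.multi[r].betas.T)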

__init__(self, y, x, regimes, yend=None, q=None, w=None, w_lags=1, lag_q=True, cores=False, max_iter=1, epsilon=1e-05, A1='het', constant_regi='many', cols2regi='all', regime_err_sep=False, regime_lag_sep=False, vm=False, name_y=None, name_x=None, name_yend=None, name_q=None, name_w=None, name_ds=None, name_regimes=None)[source]

Initialize self. See help(type(self)) for accurate signature.

Methods

__init__(self, y, x, regimes[, yend, q, w, …])

Initialize self.

Attributes

mean_y

std_y