3.3.4. Results

To find a suitable function for benefits transfer, several meta-regression models are specified: (i) different functional forms (e.g., a simple linear form versus a semi-log form); (ii) a fully specified model including all independent variables versus a restricted model trimmed on grounds of statistical significance or econometric problems (e.g., multicollinearity); and (iii) heteroskedasticity-robust standard errors to correct for heteroskedasticity.

As the test for heteroskedasticity shows (see Table 3.7), the simple linear form suffers from heteroskedasticity. Several corrections are available (e.g., GLS, WLS, heteroskedasticity-robust standard errors, and data transformation). This study uses robust standard errors and data transformation (the log transformation of the dependent variable). All independent variables are initially considered, even if later dropped on grounds of statistical insignificance or econometric problems (e.g., multicollinearity). Some variables (e.g., MSW and ACTIV) are dropped because they exhibit multicollinearity and/or are statistically insignificant at the 20% level, following the model-trimming approach suggested by Rosenberger and Loomis (2001, 2003).
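The robust-standard-error correction can be sketched in a few lines. The following is an illustrative numpy example on simulated data, not the study's actual estimation code: it compares classical OLS standard errors with White/HC1 "sandwich" standard errors on a regression whose error variance grows with the regressor.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(1, 10, n)
# error variance grows with x -> heteroskedastic disturbances
y = 2.0 + 0.5 * x + rng.normal(0, 0.3 * x)

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
k = X.shape[1]
XtX_inv = np.linalg.inv(X.T @ X)

# classical (homoskedastic) variance estimate: s^2 (X'X)^-1
s2 = resid @ resid / (n - k)
se_classical = np.sqrt(np.diag(s2 * XtX_inv))

# White/HC1 heteroskedasticity-consistent "sandwich" estimate
meat = X.T @ (X * resid[:, None] ** 2)
cov_hc1 = (n / (n - k)) * XtX_inv @ meat @ XtX_inv
se_robust = np.sqrt(np.diag(cov_hc1))
```

Under heteroskedasticity the two sets of standard errors diverge; the point estimates themselves are unchanged, only the inference is corrected.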

A wide range of diagnostic tests, as suggested by Walton et al. (2006), has been conducted on each regression for benefits transfer. The R^2 for the overall fit of the regression, hypothesis tests (F tests and t tests), and diagnostic checks (e.g., the skewness-kurtosis normality test, Ramsey's RESET test for specification error, a heteroskedasticity test, and a multicollinearity assessment) are reported.
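Two of the listed diagnostics can be illustrated on simulated data. This hedged Python sketch (the data and variable names are hypothetical, not from the meta-regression itself) runs D'Agostino's skewness-kurtosis normality test on the residuals and a hand-rolled Breusch-Pagan LM test for heteroskedasticity, where LM = n * R^2 from regressing the squared residuals on the regressors.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 150
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=np.exp(0.5 * x))  # heteroskedastic errors

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# skewness-kurtosis (D'Agostino omnibus) normality test on residuals
nm_stat, nm_p = stats.normaltest(resid)

# Breusch-Pagan LM test: regress squared residuals on X, LM = n * R^2
u2 = resid ** 2
g, *_ = np.linalg.lstsq(X, u2, rcond=None)
fitted = X @ g
r2 = 1 - np.sum((u2 - fitted) ** 2) / np.sum((u2 - u2.mean()) ** 2)
lm = n * r2
bp_p = stats.chi2.sf(lm, df=X.shape[1] - 1)  # chi-square with k-1 df
```

A small Breusch-Pagan p-value indicates heteroskedasticity and motivates the robust standard errors or log transformation used above.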

The F test assesses the null hypothesis that all or some coefficients (β) on the model's explanatory variables equal zero, i.e., H_0: β_1 = β_2 = ⋯ = β_k = 0 (Wooldridge 2003). A linear restriction test on a subset of coefficients is useful before dropping variables that are unreliable due to multicollinearity (Hamilton 2004).
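The joint F test can be computed from the restricted and unrestricted sums of squared residuals, F = [(SSR_r − SSR_u)/q] / [SSR_u/(n − k)]. The sketch below uses simulated data with hypothetical regressors (not the study's variables) to test the linear restriction H_0: β_2 = β_3 = 0 before those variables would be dropped.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 120
X_full = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
# true data-generating process: only the first slope matters
y = 1.0 + 1.5 * X_full[:, 1] + rng.normal(size=n)

def ssr(X, y):
    """Sum of squared residuals from an OLS fit of y on X."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    return e @ e

ssr_u = ssr(X_full, y)         # unrestricted: all regressors included
ssr_r = ssr(X_full[:, :2], y)  # restricted: beta_2 = beta_3 = 0 imposed
q = 2                          # number of restrictions
k = X_full.shape[1]
F = ((ssr_r - ssr_u) / q) / (ssr_u / (n - k))
p = stats.f.sf(F, q, n - k)
```

A large p-value means the restriction is not rejected, supporting removal of the restricted variables from the transfer model.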

An important issue when handling small samples is the potential for multicollinearity, a high degree of linear relationship among the explanatory variables (Walton et al. 2006). High correlation among explanatory variables in small samples raises several concerns: (i) substantially higher standard errors and lower t statistics (a greater chance of failing to reject a false null hypothesis in standard significance tests); (ii) unexpected changes in coefficient magnitudes or signs; and (iii) statistically insignificant coefficients despite a high R^2 (Hamilton 2004). A number of indicators of the presence and severity of multicollinearity exist (e.g., the VIF, tolerance, and a correlation matrix of the estimated coefficients). One is the variance inflation factor (VIF), which measures the degree to which the variance and standard error of an estimated coefficient are inflated by the inclusion of an explanatory variable that is correlated with the others (i.e., VIF_j = 1/(1 − R_j^2), where R_j^2 is obtained by regressing the j-th explanatory variable on the remaining ones); values above 10 are commonly taken to signal problematic collinearity.
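The VIF can be computed directly from the auxiliary R_j^2 regressions. The following minimal numpy sketch (illustrative data, not the study's MSW or ACTIV variables) flags two nearly collinear regressors while leaving an independent one untouched.

```python
import numpy as np

def vif(X):
    """VIF for each column of X: 1 / (1 - R_j^2), where R_j^2 comes from
    regressing column j on the remaining columns (with an intercept)."""
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        b, *_ = np.linalg.lstsq(Z, y, rcond=None)
        e = y - Z @ b
        r2 = 1 - (e @ e) / np.sum((y - y.mean()) ** 2)
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(3)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(scale=0.1, size=100)  # nearly collinear with x1
x3 = rng.normal(size=100)                  # independent regressor
v = vif(np.column_stack([x1, x2, x3]))
```

The collinear pair produces VIFs far above the conventional threshold of 10, while the independent regressor's VIF stays near 1, mirroring the decision rule used to drop variables from the transfer model.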

