Ghozali (2001) stated that a variable is reliable if the responses from the respondents are consistent across the research. The purpose of the reliability test is to measure the consistency of the measurement items. The writer will measure each variable's Cronbach's alpha to assess its internal consistency. Ghozali (2001) added that if the Cronbach's alpha value of a variable is higher than 0.6, the variable is reliable.

Statistical Method: Classic Assumption Test

When a research study uses multiple regression as its statistical tool, it implicitly relies on several assumptions (Lind, Marchal, & Wathen, 2008).
Data collected were analyzed using three approaches: 1. Cronbach's alpha (α) was used to test reliability. Cronbach's alpha indicates how well the items in a set are positively correlated with one another. This is to make sure that the scales are free of random or unstable errors and produce consistent results over time (Cooper & Schindler, 1998); 2. Descriptive statistics, where the researcher used the mean, standard deviation, and variance to get an idea of how the respondents reacted to the items in the questionnaire.
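As an illustration of approach 1, the sketch below computes Cronbach's alpha directly from its definition; the function name and the Likert-scale responses are hypothetical, and the 0.6 cut-off follows Ghozali (2001) as cited above.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 respondents x 4 Likert-scale items.
responses = [[4, 5, 4, 4],
             [3, 3, 4, 3],
             [5, 5, 5, 4],
             [2, 3, 2, 3],
             [4, 4, 5, 4]]

alpha = cronbach_alpha(responses)
print(f"Cronbach's alpha = {alpha:.3f}")  # reliable if > 0.6 (Ghozali, 2001)
```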
Secondly, linear regression analysis requires all variables to be multivariate normal. This assumption is best checked with a histogram or a Q-Q plot. Normality can also be checked with a goodness-of-fit test, e.g., the Kolmogorov-Smirnov test. When the data are not normally distributed, a non-linear transformation (e.g., a log transformation) might fix the issue.
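A minimal sketch of these normality checks, assuming the data of interest sit in a NumPy array; the lognormal sample is hypothetical, chosen only so the log transformation has something to fix. Note that estimating the mean and standard deviation from the same sample makes the Kolmogorov-Smirnov p-value approximate.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.7, size=200)  # hypothetical skewed sample

# Kolmogorov-Smirnov goodness-of-fit test against the standard normal,
# after standardizing the sample.
z = (x - x.mean()) / x.std(ddof=1)
stat, p_value = stats.kstest(z, 'norm')
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")

# Visual checks: histogram and Q-Q plot.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.hist(x, bins=20)
stats.probplot(x, dist='norm', plot=ax2)
plt.show()

# If normality is rejected, a log transformation may help for positive data.
x_log = np.log(x)
```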
3. Chi-Square is a statistical test used to compare observed data with the data the researcher expects to find under a specified hypothesis. The test determines whether the deviations of the observed data from the expected data have occurred just by chance or are caused by other factors (Brooks, 2008). The Chi-Square is usually employed to test a null hypothesis, for instance, that there is no significant difference between the expected and observed outcomes. The Chi-Square is used in two circumstances: i) when the researcher wants to estimate how closely the observed distribution matches the proportions that are expected; ii) when the researcher wants to test whether two categorical variables are independent of one another, as in the cross-tabulation analysis described later.
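A hedged sketch of circumstance i), the goodness-of-fit use, with hypothetical observed and expected counts (the two must sum to the same total):

```python
from scipy import stats

# Hypothetical observed category counts versus the counts expected
# under the null hypothesis (both sum to 100 here).
observed = [48, 35, 17]
expected = [50, 30, 20]

chi2, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {chi2:.3f}, p-value = {p_value:.3f}")
# A large p-value suggests the deviations could have occurred just by chance.
```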
The outcomes of the analysis, however, need to be interpreted with caution, particularly when looking for an underlying association or when using the regression equation for prediction. Correlation and linear regression analysis are statistical procedures for quantifying the association between an independent variable (X), sometimes called a predictor, and a continuous dependent outcome variable (Y). For a correlation study, the independent variable (X) can be continuous or ordinal. Regression analysis can also accommodate dichotomous independent variables. The procedures described here presume that the association between the independent and dependent variables is linear.
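As a brief sketch of the correlation side, with hypothetical paired observations: Pearson's r for a continuous predictor and Spearman's rho for an ordinal one.

```python
from scipy import stats

# Hypothetical paired observations of a predictor X and an outcome Y.
x = [1.0, 2.1, 2.9, 4.2, 5.1, 6.0]
y = [2.3, 4.0, 6.2, 8.1, 9.8, 12.4]

r, p_r = stats.pearsonr(x, y)       # continuous X: Pearson correlation
rho, p_rho = stats.spearmanr(x, y)  # ordinal X: Spearman rank correlation
print(f"Pearson r = {r:.3f} (p = {p_r:.3f})")
print(f"Spearman rho = {rho:.3f} (p = {p_rho:.3f})")
```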
Introduction to Regression

Regression analysis is a statistical process for estimating the relationships among variables. It includes many techniques for modeling and analyzing several variables when the focus is on the relationship between a dependent variable and one or more independent variables. Regression analysis helps one understand how the typical value of the dependent variable changes when any one of the independent variables is varied while the other independent variables are held fixed. Most commonly, regression analysis estimates the conditional expectation of the dependent variable given the independent variables, that is, the average value of the dependent variable when the independent variables are fixed. Less commonly, the focus is on a quantile or other location parameter of the conditional distribution of the dependent variable given the independent variables.
In multiple regression, one uses additional independent variables that help better explain or predict the dependent variable (Y). Multiple regression analysis can be either a descriptive or an inferential technique. The regression analysis was used to test the research hypotheses (H1 to H5). The formula is:

Y' = a + b1X1 + b2X2 + b3X3 + … + bkXk

Where: a = the Y-intercept, the estimated value of Y when all X's are zero; bi = the amount by which Y changes when Xi changes by one unit, with all other values held the same.
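The following sketch fits a model of exactly this form by ordinary least squares; the simulated data, coefficient values, and use of the statsmodels library are illustrative assumptions, not the study's actual data or software.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: 100 observations of three predictors X1..X3.
rng = np.random.default_rng(1)
n = 100
X = rng.normal(size=(n, 3))
y = 1.5 + X @ np.array([0.8, -0.4, 0.3]) + rng.normal(scale=0.5, size=n)

X_design = sm.add_constant(X)    # adds the intercept term a
model = sm.OLS(y, X_design).fit()
print(model.params)              # estimates of a, b1, b2, b3
print(model.summary())           # coefficient t-tests used to judge hypotheses
```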
The equally weighted root expected mean square difference (ewREMSD) was used in the study to give equal weight to all score points and to examine the impact of subgroup membership on test takers' success or failure designations. The standardized root squared difference (RSD) was used to determine the equating difference between the subgroups, where the RSD index compares the equating functions for non-repeaters and repeaters to that of the total group. The RWSD compared the subgroups by pairing them and assigning an index to each pair, and to detect […] 51.8% (n = 2905) of the study, while 42% were from the upper SES (n = 2699). Table 1 also shows that 51.9% of the students are female. Racial/ethnic groups were separated into six categories (White, African American, Hispanic, American Indian, Asian or Pacific Islander, and other races).
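The exact ewREMSD and RSD formulas are not reproduced in this excerpt; as a rough, hypothetical sketch of the kind of computation such indices rest on, the function below takes a weighted root mean squared difference between a subgroup's equating function and the total group's, with equal weights corresponding to the "equally weighted" case. All names and values are invented for illustration.

```python
import numpy as np

def weighted_rmsd(e_subgroup, e_total, weights):
    """Weighted root mean squared difference between two equating
    functions evaluated at the same raw-score points."""
    d = np.asarray(e_subgroup, dtype=float) - np.asarray(e_total, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.sqrt(np.sum(w * d**2))

# Hypothetical equated scores at five raw-score points.
e_repeaters = [10.2, 15.1, 20.3, 25.6, 30.9]
e_total = [10.0, 15.0, 20.0, 25.5, 31.0]

print(weighted_rmsd(e_repeaters, e_total, np.ones(5)))  # equal weights
```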
3.3.4. Results

For the purpose of finding a suitable function for benefits transfer, different meta-regression models were specified: (i) different functional forms (e.g., a simple linear form versus a semi-log form); (ii) a fully specified model including all independent variables versus a restricted model, reduced on grounds of statistical significance or econometric problems (e.g., multicollinearity); (iii) robust consistent standard errors to correct for heteroskedasticity. As shown by the test for heteroskedasticity (see Table 3.7), the simple linear form exhibits heteroskedasticity. There are several ways to correct for heteroskedasticity (e.g., GLS, WLS, robust consistent errors, and data transformation). For this study, robust consistent standard errors and data transformation (e.g., the log transformation of the dependent variable) are utilized.
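A minimal sketch of points (i) and (iii), assuming the statsmodels library; the simulated data are hypothetical, constructed so the error variance grows with the predictor.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

# Hypothetical data whose error variance grows with x.
rng = np.random.default_rng(2)
n = 200
x = rng.uniform(1, 10, size=n)
y = np.exp(0.5 + 0.2 * x + rng.normal(scale=0.05 * x, size=n))

X = sm.add_constant(x)

# Test for heteroskedasticity on the simple linear form.
ols_fit = sm.OLS(y, X).fit()
lm_stat, lm_p, f_stat, f_p = het_breuschpagan(ols_fit.resid, X)
print(f"Breusch-Pagan p-value = {lm_p:.4f}")

# Correction 1: heteroskedasticity-robust (consistent) standard errors.
robust_fit = sm.OLS(y, X).fit(cov_type='HC1')
print(robust_fit.bse)  # robust standard errors

# Correction 2: log transformation of the dependent variable (semi-log form).
log_fit = sm.OLS(np.log(y), X).fit()
print(log_fit.params)
```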
The purpose of this chapter is to go beyond univariate statistics, in which the analysis focuses on one variable at a time. Bivariate descriptive statistics involves simultaneously analyzing (comparing) two variables to determine whether there is a relationship between them. Association is based on how two variables change together, the notion of co-variation. To carry out this analysis we used the cross-tabulation technique to find associations among variables; we first test whether or not two variables are associated.
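A short sketch of the cross-tabulation step, with hypothetical categorical data; the chi-square test of independence then checks whether the two variables are associated.

```python
import pandas as pd
from scipy import stats

# Hypothetical categorical data on two variables.
df = pd.DataFrame({
    'gender':     ['F', 'M', 'F', 'F', 'M', 'M', 'F', 'M', 'F', 'M'],
    'preference': ['A', 'B', 'A', 'A', 'B', 'A', 'B', 'B', 'A', 'B'],
})

# Cross tabulation of the two variables.
table = pd.crosstab(df['gender'], df['preference'])
print(table)

# Chi-square test of independence: are the two variables associated?
chi2, p_value, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, p-value = {p_value:.3f}")
```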