can estimate parameters of this line, such as its slope and intercept, from the GLM. From high school
algebra, recall that straight lines can be represented using the mathematical equation y =
mx + c, where m is the slope of the straight line (how much does y change for unit change in x)
and c is the intercept term (what is the value of y when x is zero). In GLM, this equation is
represented formally as:
y = β0 + β1 x + ε
where β0 is the intercept term, β1 is the slope, and ε is the error term. ε represents the deviation
of actual observations from their estimated values, since most observations are close to the line
but do not fall exactly on the line (i.e., the GLM is not perfect). Note that a linear model can have
more than two predictors. To visualize a linear model with two predictors, imagine a three-dimensional
cube, with the outcome (y) along the vertical axis, and the two predictors (say, x1
and x2) along the two horizontal axes along the base of the cube. A line that describes the
relationship between two or more variables is called a regression line, β0 and β1 (and other beta
values) are called regression coefficients, and the process of estimating regression coefficients is
called regression analysis. The GLM for regression analysis with n predictor variables is:
y = β0 + β1 x1 + β2 x2 + β3 x3 + … + βn xn + ε
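For the single-predictor case, the least-squares estimates of β0 and β1 have a simple closed form. The following Python sketch computes them from hypothetical data (all numbers are made up for illustration):

```python
# Least-squares estimates of the intercept (b0) and slope (b1) in
# y = b0 + b1*x + e, using hypothetical data.
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# b1 = Σ(x - x̄)(y - ȳ) / Σ(x - x̄)²;  b0 = ȳ - b1·x̄
b1 = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) \
     / sum((xi - mean_x) ** 2 for xi in x)
b0 = mean_y - b1 * mean_x

# The residuals (the error term ε) are the gaps between observed and fitted y.
residuals = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
print(round(b0, 2), round(b1, 2))
```

For these data the fitted line is y = 0.14 + 1.96x; each residual measures how far an observation falls from that line.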
In the above equation, predictor variables xi may represent independent variables or
covariates (control variables). Covariates are variables that are not of theoretical interest but
may have some impact on the dependent variable y and should be controlled, so that the
residual effects of the independent variables of interest are detected more precisely. Covariates
capture systematic errors in a regression equation while the error term (ε) captures random
errors. Though most variables in the GLM tend to be interval or ratio-scaled, this does not have
to be the case. Some predictor variables may even be nominal variables (e.g., gender: male or
female), which are coded as dummy variables. These are variables that can assume one of only
two possible values: 0 or 1 (in the gender example, “male” may be designated as 0 and “female”
as 1 or vice versa). A nominal variable with n levels is represented using n–1 dummy variables. For
instance, industry sector, consisting of the agriculture, manufacturing, and service sectors, may
be represented using a combination of two dummy variables (x1, x2), with (0, 0) for agriculture,
(0, 1) for manufacturing, and (1, 0) for service. It does not matter which level of a nominal
variable is coded as 0 and which level as 1, because 0 and 1 values are treated as two distinct
groups (such as treatment and control groups in an experimental design), rather than as
numeric quantities, and the statistical parameters of each group are estimated separately.
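The n–1 dummy-coding scheme can be sketched as a small Python function; the level coded all zeros serves as the reference (baseline) group, and which level plays that role is an arbitrary choice:

```python
# Sketch of n-1 dummy coding for a nominal variable with n levels.
# The level coded (0, 0, ...) is the reference (baseline) group.
def dummy_code(levels, reference):
    """Map each level of a nominal variable to a tuple of n-1 dummies."""
    others = [lv for lv in levels if lv != reference]
    coding = {reference: tuple(0 for _ in others)}
    for i, lv in enumerate(others):
        coding[lv] = tuple(1 if j == i else 0 for j in range(len(others)))
    return coding

sectors = ["agriculture", "manufacturing", "service"]
codes = dummy_code(sectors, reference="agriculture")
# agriculture -> (0, 0), manufacturing -> (1, 0), service -> (0, 1)
```

Each non-reference level gets a 1 in exactly one dummy, so the three sectors remain mutually exclusive groups rather than numeric quantities.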
The GLM is a very powerful statistical tool because it is not one single statistical method,
but rather a family of methods that can be used to conduct sophisticated analysis with different
types and quantities of predictor and outcome variables. If we have a dummy predictor
variable, and we are comparing the effects of the two levels (0 and 1) of this dummy variable on
the outcome variable, we are doing an analysis of variance (ANOVA). If we are doing ANOVA
while controlling for the effects of one or more covariates, we have an analysis of covariance
(ANCOVA). We can also have multiple outcome variables (e.g., y1, y2, … yn), which are
represented using a “system of equations” consisting of a different equation for each outcome
variable (each with its own unique set of regression coefficients). If multiple outcome variables
are modeled as being predicted by the same set of predictor variables, the resulting analysis is
called multivariate regression. If we are doing ANOVA or ANCOVA analysis with multiple
outcome variables, the resulting analysis is a multivariate ANOVA (MANOVA) or multivariate
ANCOVA (MANCOVA) respectively. If we model the outcome in one regression equation as a
132 | Social Science Research
predictor in another equation in an interrelated system of regression equations, then we have a
very sophisticated type of analysis called structural equation modeling. The most important
problem in GLM is model specification, i.e., how to specify a regression equation (or a system of
equations) to best represent the phenomenon of interest. Model specification should be based
on theoretical considerations about the phenomenon being studied, rather than what fits the
observed data best. The role of data is in validating the model, and not in its specification.
Two-Group Comparison
One of the simplest inferential analyses is comparing the post-test outcomes of
treatment and control group subjects in a randomized post-test only control group design, such
as whether students enrolled in a special program in mathematics perform better than those in
a traditional math curriculum. In this case, the predictor variable is a dummy variable
(1=treatment group, 0=control group), and the outcome variable, performance, is ratio scaled
(e.g., score of a math test following the special program). The analytic technique for this simple
design is a one-way ANOVA (one-way because it involves only one predictor variable), and the
statistical test used is called a Student’s t-test (or t-test, in short).
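The equivalence between this one-way ANOVA and a dummy-variable regression can be seen directly: regressing the outcome on the 0/1 dummy recovers the two group means. A minimal Python sketch with hypothetical scores:

```python
# Sketch: regressing the outcome on a 0/1 dummy reproduces the two group
# means -- the basis of one-way ANOVA.  Scores are hypothetical.
from statistics import mean

control = [42, 47, 44, 46, 41]    # d = 0
treatment = [63, 68, 64, 66, 69]  # d = 1

d = [0] * len(control) + [1] * len(treatment)
y = control + treatment

mean_d, mean_y = mean(d), mean(y)
b1 = sum((di - mean_d) * (yi - mean_y) for di, yi in zip(d, y)) \
     / sum((di - mean_d) ** 2 for di in d)
b0 = mean_y - b1 * mean_d

# b0 equals the control-group mean; b1 equals the treatment-control difference.
print(b0, b1)
```

Here b0 is 44 (the control mean) and b1 is 22 (the treatment minus control difference), which is exactly the quantity the t-test evaluates.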
The t-test was introduced in 1908 by William Sealy Gosset, a chemist working for the
Guinness Brewery in Dublin, Ireland, to monitor the quality of stout – a dark beer popular with
19th-century porters in London. Because his employer did not want to reveal the fact that it was
using statistics for quality control, Gosset published the test in Biometrika under his pen name
“Student”, and the test involved calculating the value of a statistic denoted t. Hence, the name
Student’s t-test, although Student’s identity was known to fellow statisticians.
The t-test examines whether the means of two groups are statistically different from
each other (non-directional or two-tailed test), or whether one group has a statistically larger
(or smaller) mean than the other (directional or one-tailed test). In our example, if we wish to
examine whether students in the special math curriculum perform better than those in the
traditional curriculum, we have a one-tailed test. This hypothesis can be stated as:
H0: μ1 ≤ μ2 (null hypothesis)
H1: μ1 > μ2 (alternative hypothesis)
where μ1 represents the mean population performance of students exposed to the special
curriculum (treatment group) and μ2 is the mean population performance of students in the
traditional curriculum (control group). Note that the null hypothesis is always the one with the
“equal” sign, and the goal of all statistical significance tests is to reject the null hypothesis.
How can we draw inferences about the difference in population means using data from samples
drawn from each population? From the hypothetical frequency distributions of the treatment
and control group scores in Figure 15.2, the control group appears to have a bell-shaped
(normal) distribution with a mean score of 45 (on a 0-100 scale), while the treatment group
appears to have a mean score of 65. These means look different, but they are really sample
means (x̄), which may differ from their corresponding population means (μ) due to sampling
error. Sample means are probabilistic estimates of population means within a certain
confidence interval (the 95% CI is the sample mean ± two standard errors, where the standard
error is the standard deviation of the distribution of sample means as taken from infinite
samples of the population). Hence, statistical significance of population means depends not only on sample
mean scores, but also on the standard error or the degree of spread in the frequency
distribution of the sample means. If the spread is large (i.e., the two bell-shaped curves have a
lot of overlap), then the 95% CI of the two means may also be overlapping, and we cannot
conclude with high probability (p<0.05) that their corresponding population means are
significantly different. However, if the curves have narrower spreads (i.e., they are less
overlapping), then the CI of each mean may not overlap, and we reject the null hypothesis and
say that the population means of the two groups are significantly different at p<0.05.
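As a sketch, the pooled two-sample t statistic behind this comparison can be computed by hand. The scores below are hypothetical, and a complete test would compare t against the t-distribution with n1 + n2 – 2 degrees of freedom (e.g., via a statistical table or software):

```python
# Sketch of Student's pooled two-sample t statistic for the math-curriculum
# example, with hypothetical scores.
from math import sqrt
from statistics import mean, variance  # variance() is the sample variance

control = [45, 40, 50, 43, 47, 44, 46]
treatment = [65, 60, 70, 63, 67, 66, 64]

n1, n2 = len(treatment), len(control)
m1, m2 = mean(treatment), mean(control)

# The pooled variance assumes both groups share a common population variance.
sp2 = ((n1 - 1) * variance(treatment) + (n2 - 1) * variance(control)) \
      / (n1 + n2 - 2)
t = (m1 - m2) / sqrt(sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2
print(round(t, 2), df)
```

A large t with 12 degrees of freedom, as here, falls far beyond the one-tailed critical value, so the null hypothesis that the special curriculum is no better would be rejected.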