The easiest way to test for the above hypothesis is to look up critical values of r from
statistical tables available in any standard text book on statistics or on the Internet (most
software programs also perform significance testing). The critical value of r depends on our
desired significance level (α = 0.05), the degrees of freedom (df), and whether the desired test is
a one-tailed or two-tailed test. Degrees of freedom is the number of values that can vary
freely in the calculation of a statistic. In the case of correlation, df simply equals n – 2; for the
data in Table 14.1, df is 20 – 2 = 18. There are separate statistical tables for one-tailed and
two-tailed tests. In the two-tailed table, the critical value of r for α = 0.05 and df = 18 is 0.44. For
our computed correlation of 0.79 to be significant, it must be larger than the critical value of
0.44 or less than -0.44. Since our computed value of 0.79 is greater than 0.44, we conclude that
there is a significant correlation between age and self-esteem in our data set, or in other words,
the odds are less than 5% that this correlation is a chance occurrence. Therefore, we can reject
the null hypothesis that r ≤ 0, which is an indirect way of saying that the alternative hypothesis
r > 0 is probably correct.
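Instead of looking up critical values in a table, the same test can be run in software. The sketch below uses Python's scipy (an illustrative substitute for the statistical tables and SPSS mentioned in the text) with the figures quoted above: r = 0.79, n = 20, α = 0.05. The conversion of r to a t statistic with n – 2 degrees of freedom is the standard approach; the raw data themselves are not reproduced here.

```python
# Significance test for a Pearson correlation, using the figures from the
# text: r = 0.79, n = 20, alpha = 0.05 (two-tailed).
from scipy import stats

r, n = 0.79, 20
df = n - 2                        # degrees of freedom for correlation: n - 2

# Convert r to a t statistic and get a two-tailed p-value.
t = r * (df / (1 - r**2)) ** 0.5
p_two_tailed = 2 * stats.t.sf(abs(t), df)

# Recover the tabled critical value of r by inverting the same transform.
t_crit = stats.t.ppf(1 - 0.05 / 2, df)
r_crit = t_crit / (t_crit**2 + df) ** 0.5

print(round(r_crit, 2))           # 0.44, matching the two-tailed table
print(p_two_tailed < 0.05)        # True: the correlation is significant
```

Since the computed t exceeds the critical t (equivalently, 0.79 exceeds 0.44), the null hypothesis is rejected, exactly as the table lookup concluded.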
Most research studies involve more than two variables. If there are n variables, then we
will have a total of n*(n-1)/2 possible correlations between these n variables. Such correlations
are easily computed using a software program like SPSS, rather than manually using the
formula for correlation (as we did in Table 14.1), and represented using a correlation matrix, as
shown in Table 14.2. A correlation matrix is a matrix that lists the variable names along the
first row and the first column, and depicts bivariate correlations between pairs of variables in
the appropriate cell in the matrix. The values along the principal diagonal (from the top left to
the bottom right corner) of this matrix are always 1, because any variable is always perfectly
correlated with itself. Further, since correlations are non-directional, the correlation between
variables V1 and V2 is the same as that between V2 and V1. Hence, the lower triangular matrix
(values below the principal diagonal) is a mirror reflection of the upper triangular matrix
(values above the principal diagonal), and therefore, we often list only the lower triangular
matrix for simplicity. If the correlations involve variables measured using interval scales, then
this specific type of correlation is called a Pearson product moment correlation.
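The construction of such a matrix can be sketched in a few lines of pandas (used here in place of SPSS; the variable names V1–V4 and the data are made up for illustration), showing the properties just described: a unit diagonal, symmetry, and the lower triangular presentation.

```python
# Sketch of a correlation matrix for several variables, using pandas in
# place of SPSS. The data and variable names here are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(20, 4)), columns=["V1", "V2", "V3", "V4"])

corr = data.corr()                # Pearson correlations by default

# The principal diagonal is always 1 (every variable correlates perfectly
# with itself), and the matrix is symmetric: corr(V1, V2) == corr(V2, V1).
print(np.allclose(np.diag(corr), 1.0))       # True
print(np.allclose(corr.values, corr.values.T))  # True

# With n = 4 variables there are n*(n-1)/2 = 6 distinct correlations.
# Keep only the lower triangular part, as is common in published tables.
lower = corr.where(np.tril(np.ones(corr.shape, dtype=bool)))
print(lower.round(2))
```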
Another useful way of presenting bivariate data is cross-tabulation (often abbreviated
to cross-tab, and sometimes more formally called a contingency table). A cross-tab is a table
that describes the frequency (or percentage) of all combinations of two or more nominal or
categorical variables. As an example, let us assume that we have the following observations of
gender and grade for a sample of 20 students, as shown in Table 14.3. Gender is a nominal
variable (male/female or M/F), and grade is a categorical variable with three levels (A, B, and
C). A simple cross-tabulation of the data may display the joint distribution of gender and grades
(i.e., how many students of each gender are in each grade category, as a raw frequency count or
as a percentage) in a 2 x 3 matrix. This matrix will help us see if A, B, and C grades are equally
distributed across male and female students. The cross-tab data in Table 14.3 shows that the
distribution of A grades is biased heavily toward female students: in a sample of 10 male and 10
female students, five female students received the A grade compared to only one male student.
In contrast, the distribution of C grades is biased toward male students: three male students
received a C grade, compared to only one female student. However, the distribution of B grades
was somewhat uniform, with six male students and four female students (so that each gender
sums to ten students). The last row and the last column of this table are called marginal totals
because they indicate the totals across each category and are displayed along the margins of
the table.
Table 14.2. A hypothetical correlation matrix for eight variables
Table 14.3. Example of cross-tab analysis
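A cross-tab like Table 14.3 can be produced directly from the raw observations. The sketch below reconstructs the 20 observations from the cell counts reported above (one male and five female A grades, six male and four female B grades, three male and one female C grades — so each gender totals ten) and tabulates them with pandas, including the marginal totals.

```python
# Reconstructing the gender-by-grade cross-tab of Table 14.3 from the cell
# counts reported in the text (illustrative; the raw data are not shown).
import pandas as pd

genders = ["M"] * 1 + ["F"] * 5 + ["M"] * 6 + ["F"] * 4 + ["M"] * 3 + ["F"] * 1
grades  = ["A"] * 6             + ["B"] * 10            + ["C"] * 4
df = pd.DataFrame({"gender": genders, "grade": grades})

# Cross-tabulate with marginal totals along the last row and column.
table = pd.crosstab(df["gender"], df["grade"], margins=True, margins_name="Total")
print(table)
```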
Although we can see a distinct pattern of grade distribution between male and female
students in Table 14.3, is this pattern real or “statistically significant”? In other words, do the
above frequency counts differ from what may be expected by pure chance? To answer
this question, we must compute the expected count of observations in each cell of the 2 x 3
cross-tab matrix. This is done by multiplying the marginal column total and the marginal row
total for each cell and dividing it by the total number of observations. For example, for the
male/A grade cell, expected count = 6 * 10 / 20 = 3. In other words, we would expect three
male students to receive an A grade, but in reality, only one male student received the A grade.
Whether this difference between expected and actual count is significant can be tested using a
chi-square test. The chi-square statistic is computed by summing, across all cells of the table,
the squared difference between the observed and expected counts divided by the expected
count.
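The expected counts and the chi-square test can be computed together with scipy (again a stand-in for the statistical software the text refers to), using the cell counts from Table 14.3 arranged so that each gender's row sums to ten.

```python
# Expected counts and chi-square test for the 2 x 3 gender-by-grade table.
# Cell counts follow the figures quoted in the text.
import numpy as np
from scipy import stats

# Rows: male, female; columns: grades A, B, C.
observed = np.array([[1, 6, 3],
                     [5, 4, 1]])

chi2, p, dof, expected = stats.chi2_contingency(observed)

# Expected count per cell = (row total * column total) / grand total,
# e.g. male/A: 10 * 6 / 20 = 3.0.
print(expected[0, 0])     # 3.0
print(dof)                # (2 - 1) * (3 - 1) = 2 degrees of freedom
print(chi2, p)            # compare p against alpha = 0.05
```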