Surveys
Common Problems with surveys
unnecessary questions
demographic questions that you should already have the answers to
ask only what is important and relevant
e.g., don't ask where participants are from if the program is focused on one specific area
Biased/leading questions
"community organizing is hard. Do leadership trainings help you feel prepared for community organizing?"
this is a leading question
Double-barreled or compound questions
"i feel welcomed by staff and other youth at the center"
question about staff and other youth when they could be different answers
race/ethinicty
Double negatives (ambiguous and confusing)
"Does it seem possible or impossible to you that the Nazi extermination of the Jews never happened?"
which is it, possible or impossible? the double negative leaves the intended answer ambiguous
Assuming prior knowledge or understanding
Inadequate response items
categories are not exhaustive (they don't cover every possible answer)
categories are not mutually exclusive
i.e., response options should not overlap with each other
Rating level inconsistencies
usually not a problem within a single measure of a construct, but it can appear across longer surveys
Survey length
only ask what you need to know right now
Too many open-ended questions
Before assessing validity
statistical conclusion validity
do the variables actually covary?
Covariation
A necessary condition for inferring cause:
if there is no variance, no relationship can be detected
covariation when there is an actual relationship?
GREAT
Covariation when there isn't an actual relationship?
NOT GOOD - type I error
No covariation when there isn't an actual relationship?
GREAT
No covariation when there is an actual relationship?
NOT GOOD - type II error
type I error
false positive
type II error
false negative
How to assess type I and II errors
alpha: the probability of a type I error
conventionally set at p <= .05
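A minimal Python sketch of what alpha = .05 means in practice: both groups are simulated from the same population, so any "significant" result is a false positive (group sizes and seed are just illustrative choices):

```python
# Simulate the type I error rate: two groups drawn from the SAME population,
# so every significant t-test result is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, alpha = 5_000, 0.05
false_positives = 0

for _ in range(n_sims):
    a = rng.normal(loc=0, scale=1, size=30)  # group A, true mean 0
    b = rng.normal(loc=0, scale=1, size=30)  # group B, same true mean
    _, p = stats.ttest_ind(a, b)
    if p <= alpha:
        false_positives += 1

# With alpha = .05, roughly 5% of tests come out "significant" by chance alone.
print(f"Type I error rate: {false_positives / n_sims:.3f}")
```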
Sophisticated answers
assess threats to statistical conclusion validity
low statistical power
1. Is the study sensitive enough to permit reasonable statements about covariation?
2. How much power does a study have to detect a difference when one actually exists?
two functions:
A priori (i.e., planning a study)
conduct a power analysis to determine the sample size required for detecting an effect of the desired magnitude (e.g., small, medium, large)
power analysis calculator/formula (sketch below)
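A minimal sketch of an a priori power analysis, assuming statsmodels is available; the effect size, alpha, and power values are only the conventional illustrative choices:

```python
# A priori power analysis: how many participants per group are needed
# to detect a medium effect (Cohen's d = 0.5) with 80% power at alpha = .05?
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Required sample size per group: {n_per_group:.0f}")  # roughly 64 per group
```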
Post-hoc (i.e., evaluating a study's power)
most common approach: significance testing
p<= .05
becoming more popular: confidence intervals, or the magnitude of the effect that could have been reasonably detected
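A minimal sketch of the post-hoc side: reporting a confidence interval and the observed effect size alongside the p-value, rather than p <= .05 alone (the scores below are made up for illustration):

```python
# Post-hoc evaluation: report the p-value together with the observed effect
# size (Cohen's d) and a 95% CI for the mean difference.
import numpy as np
from scipy import stats

treatment = np.array([5.1, 6.3, 5.8, 7.0, 6.1, 5.5, 6.8, 6.4])  # illustrative scores
control   = np.array([4.9, 5.2, 5.0, 5.7, 4.8, 5.3, 5.6, 5.1])

t, p = stats.ttest_ind(treatment, control)
diff = treatment.mean() - control.mean()

# Pooled standard deviation (equal group sizes) for Cohen's d
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
d = diff / pooled_sd

# 95% CI for the mean difference (equal-variance t interval)
se = pooled_sd * np.sqrt(1 / len(treatment) + 1 / len(control))
df = len(treatment) + len(control) - 2
margin = stats.t.ppf(0.975, df) * se
print(f"p = {p:.3f}, d = {d:.2f}, "
      f"95% CI for difference: [{diff - margin:.2f}, {diff + margin:.2f}]")
```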
Violated assumptions of statistical tests
most common assumptions
equivalent groups at the beginning
normality
equal variances
assumptions vary by statistical test
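A minimal sketch of checking two of these assumptions with scipy on simulated data (which checks actually apply depends on the specific test being used):

```python
# Quick checks of two common assumptions before a two-group comparison:
# normality (Shapiro-Wilk) and equal variances (Levene's test).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=10, scale=2, size=40)  # illustrative data
group_b = rng.normal(loc=11, scale=2, size=40)

for name, g in [("group A", group_a), ("group B", group_b)]:
    stat, p = stats.shapiro(g)
    print(f"Shapiro-Wilk for {name}: p = {p:.3f}")  # p > .05 -> no evidence of non-normality

stat, p = stats.levene(group_a, group_b)
print(f"Levene's test for equal variances: p = {p:.3f}")
```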
Fishing
Chances are high that if you test every possible relationship, something will be significant
searching for an effect by testing the data from many different angles
increases type I error
what to look for
post-hoc tests presented as a priori hypotheses
multiple tests when a single test would be sufficient
what not to do
present post-hoc/exploratory analyses as a priori hypotheses
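A minimal simulation sketch of why fishing inflates type I error, plus one common correction (Bonferroni, via statsmodels); the number of outcomes and sample sizes are illustrative:

```python
# Fishing: test 20 unrelated outcomes when no real effects exist.
# The chance of at least one "significant" result is far above 5%.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(7)
n_sims, n_outcomes = 1_000, 20
any_hit = 0

for _ in range(n_sims):
    pvals = []
    for _ in range(n_outcomes):
        a = rng.normal(size=25)  # no true difference anywhere
        b = rng.normal(size=25)
        pvals.append(stats.ttest_ind(a, b).pvalue)
    if min(pvals) <= 0.05:
        any_hit += 1

# Roughly 1 - 0.95**20, i.e. about 64%.
print(f"P(at least one false positive across 20 tests): {any_hit / n_sims:.2f}")

# One common fix when multiple tests are unavoidable: correct the p-values.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
print(f"Significant after Bonferroni correction: {reject.sum()}")
```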
Reliability of measures
reliability of treatment implementation from participant to participant
random irrelevancies in the test conditions
regression to the mean
Random heterogeneity of participants