Jablon Flashcards - INFERENTIAL STATISTICS

Description

Flashcards on Jablon Flashcards - INFERENTIAL STATISTICS, created by anahitazk on 13/07/2014.

Resource summary

Question Answer
inferential statistics all about testing hypotheses --tests allow us to make inferences about population, even though we only collect data from a sample
sample vs. population population = everybody, e.g., every depressed person sample = subset of population, e.g., depressed people you collect data from for your tx study
randomly selected subjects an assumption that must be met in order to make inferences about the population, i.e., subjects must be randomly selected from the population you want to generalize to
parameters measures related to the POPULATION, i.e., every person is included in the statistic common parameters *mu (μ) = population mean *sigma (σ) = population standard deviation
common sample statistics x-bar (x̄) = sample mean S or SD = sample standard deviation
problem with sampling typically, whenever we sample there is some degree of sampling error i.e., sample is not PERFECTLY representative of the population; it's a little bit off sampling error causes sample mean to be slightly different than population mean
how do we know there is sampling error? e.g., population mean for IQ = 100 if we randomly select 25 ppl from population and measure IQ, x-bar is usually a little higher or lower than population mean
standard error of the mean avg amount of error in the group mean when you sample subjects from the population. --the fact that the means across groups have a certain amount of spread, that spread is error --avg spread is called standard error of the mean
calculation of standard error of the mean *don't actually have to collect data from multiple groups to get the value* = σ/sqrt(N) where, σ = population standard deviation N = sample size (in 1 group/sample) e.g., standard error of IQ scores, w/groups of 25 ...SD = 15 ...group size = 25 ...15/sqrt(25) = 3 *tells us if we take a sample of 25 ppl, measure them on IQ, then the avg amount of error in the group's mean will be about 3pts away from the population mean >>so, if plotting the bell curve of means across samples, the midpoint/mean would be 100, -1SD would be 97, +1SD would be 103, etc
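A minimal Python sketch of this calculation, plugging in the IQ example numbers from the card (σ = 15, n = 25):
```python
# Standard error of the mean: sigma / sqrt(N), using the card's IQ example.
import math

def standard_error(sigma: float, n: int) -> float:
    return sigma / math.sqrt(n)

print(standard_error(15, 25))  # -> 3.0: sample means land ~3 IQ points from 100 on average
```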
Full Example/Explanation of Standard Error of the Mean if we went into population and took samples of equal size -- group size, n = 25 (randomly selected) >>collect a mean for each group >>let's say 1,000 groups of 25 = 1,000 diff means on IQ score, for example >>if we plot ALL means, they will end up being normally distributed (MIRACULOUSLY!), a bell-shape distribution >>IF NO SAMPLING ERROR, mean of every group would equal the mean of the population, e.g., 100 for an IQ score >>due to sampling error, means end up being spread on bell-shape distribution. >>avg amount of error = SD of the curve
central limit theorem if we take infinite number of samples from population (of equal size), and we plot means, we'll get a NORMAL distribution --even if the population isn't normal; the distribution of the MEANS will be normal >>the mean of all the means (i.e., grand mean) in your normal distribution will be the mean of the population >>and SD of means will always be the SD of the population divided by the square root of the sample size, i.e., standard error of the mean (hypothetical, theoretical construct)
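A small simulation sketch of the central limit theorem (the population, sample size, and number of samples are all made up): means drawn from a clearly non-normal exponential population still pile up in a bell shape around the population mean, with spread close to σ/sqrt(n).
```python
import numpy as np

rng = np.random.default_rng(0)
n, n_samples = 25, 10_000                 # made-up sample size and number of samples

# Exponential(scale=1) population: skewed, mean = 1, sd = 1 (not normal at all).
means = rng.exponential(scale=1.0, size=(n_samples, n)).mean(axis=1)

print(means.mean())       # close to the population mean of 1 (the "grand mean")
print(means.std())        # close to sigma / sqrt(n) = 1 / 5 = 0.2
```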
research utility of central limit theorem tells you how likely it is that any particular group mean will come about by chance (or due to random selection issues) i.e., did your predictor/treatment really make a difference or was it just by chance that you got a lesser or greater mean?
to decrease stand. error of the mean, what happens to standard deviation of the population and sample size? *standard deviation of the population decreases and/or *sample size increases
hypothesis testing your hypothesis is your statement of belief about what is occurring in the POPULATION --thus, statements of hypotheses are always in terms of the population >>however, hypotheses are tested using SAMPLES
types of hypotheses *hypotheses about relationships --that two things are or are not related (e.g., relationship between education and income) *hypotheses about group differences --look at differences between two groups to see if one is better than another --e.g., is medication A better than medication B?
null hypothesis (H-subzero) no differences in your groups or relationship among your variables of interest --this is a hypothesis we are hoping to REJECT e.g., difference between drug A and drug B = 0 >> H0: mu(1) = mu(2)
alternative hypothesis (opposite of null) --what you are hoping to find e.g., that two means are not equal or that one group is greater than another group
regions in bell shape curve 1. region of likely values --where we accept/retain the null; differences not significant and/or based on chance 2. region of unlikely values --where we reject the null
region of unlikely values (AKA alpha) --where you are unlikely to end up (with those values/scores) by chance >>tail ends of bell curve --when value falls here we REJECT NULL hypothesis --a predetermined value; region of unlikely values will always be a certain percentage under the curve 3 possibilities: .05 = 5% of curve .01 = 1% of curve .001 = .1% of curve
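A quick sketch of where those cutoffs fall, assuming a two-tailed z test and using scipy for the normal quantiles:
```python
from scipy.stats import norm

# Region of unlikely values (alpha) for a two-tailed z test at common levels.
for alpha in (0.05, 0.01, 0.001):
    cutoff = norm.ppf(1 - alpha / 2)
    print(f"alpha = {alpha}: reject H0 when |z| > {cutoff:.2f}")
```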
region of likely values --when value falls here, we accept/retain the null hypothesis aka - acceptance/retention region (not the rejection region, which is the region of unlikely values)
4 possible outcomes from hypothesis testing 1) we could reject null hypothesis and be correct 2) we could reject null hypothesis and be wrong (Type I error) 3) we could accept null hypothesis and be right 4) we could accept null hypothesis and be wrong
what causes our conclusions (from hypothesis testing) to be wrong? chance factors >>REPLICATION is how we find out whether we made a correct or incorrect decision
Type I error (aka alpha) --when you originally reject null hypothesis --but upon replication, no significant differences are found >>CHANCE of Type I error is same as "unlikely" or alpha region --e.g., if alpha is .05, you have 5% chance of making Type I error --if alpha is .001, you have smallest chance of type I error, but biggest chance of type II error *incorrect rejection*
Type II error (aka beta) --when you originally accept the null (i.e., no significance) --but upon replication, there is significance (i.e., chance brings down your scores and tx was really effective) *incorrectly accept*
Rule of the "C's" incorrectly reject = 1 C = Type I error incorrectly accept = 2 C's = Type II error
Power --ability to correctly reject the null hypothesis --reject the null when there really IS statistical significance/differences *correct rejection*
what factors affect power? --the larger your sample, the more power your research has, i.e., greater chance you will find significance --the stronger your intervention, the more power, e.g., intervene for 1mo rather than 1 day and you'll have a greater likelihood of significance --the less error, the more power --overall, parametric tests (e.g., t-tests, ANOVAs) are more powerful --one-tailed tests are more powerful
relationship between alpha and beta INVERSE! when alpha is lower, beta is higher i.e., when your chances of Type I error are lower, your chances of Type II error are higher
relationship between power and Beta INVERSE! formula: Power = 1 - Beta *the higher the beta, the lower your power *the lower the beta, the higher your power
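A rough sketch tying the last few cards together: for a one-tailed, one-sample z test with an assumed effect of 0.5 SD, power (1 - beta) climbs as the sample grows. The alpha, effect size, and sample sizes are purely illustrative.
```python
from scipy.stats import norm

alpha, d = 0.05, 0.5                     # assumed alpha and effect size (in SD units)
z_crit = norm.ppf(1 - alpha)             # one-tailed critical value

for n in (10, 25, 50, 100):
    power = 1 - norm.cdf(z_crit - d * n ** 0.5)   # beta = 1 - power
    print(f"n = {n:3d}  power = {power:.2f}")
```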
3 broad categories of statistical tests 1. Tests of differences 2. Tests of relationship and prediction 3. Tests related to structure or fit
tests of differences --looking at diff's between groups --e.g., t-tests, ANOVA's, Chi-Squares
Tests of relationship and prediction --asking, are 2 variables related? >>relationship: e.g., Pearson correlations, biserial >>prediction: e.g., multiple regression, canonical correlation
tests related to structure or fit --gather large amount of data to ask about underlying structure/fit, how things fit together --e.g., factor analysis
determining which *test of difference* to use --when testing group differences, 1) how many DVs and what TYPE of data? (i.e., nominal, ordinal, interval, ratio) 2) how many IV's (i.e., what is being compared)? and how many levels for each IV? 3) is data independent or correlated?
assumptions that must be met to use PARAMETRIC tests -interval/ratio data -homoscedasticity (all variables have same variance) -normal distribution of data
homoscedasticity same spread, variance, standard deviation among variables --needed in order to run parametric tests
study of depression and medication treatment with 3 ethnic groups -how many variables? -how many levels? --2 variables *treatment = 2 levels (medication or not) *grouping variable (ethnicity) = 3 levels (3 ethnic groups)
DETERMINE WHICH TEST OF DIFFERENCE TO USE FOR: two programs designed to improve academic achievement --programs are compared by measuring students at 1) beginning, 2) middle and 3) end of school year questions to ask... 1) Data? Interval THUS - use parametric test 2) # of DVs? 1 (since they don't say more than 1) 3) # of IVs? 2 --program (2 levels) and --time of measurement (3 levels) 4) is IV data independent or correlated? programs are independent, time is a correlated variable
Independent Variables (IVs) and tests of difference IVs are the way your groups are being compared, what they're being compared ON - e.g., medication outcomes, achievement performance, etc. IVs have LEVELS = number of groups being compared e.g., if you are comparing pre-treatment, mid-treatment, and post-treatment outcomes of depression, time is an IV with 3 levels e.g., if you are comparing medication and psychotherapy for depressed people across 3 ethnic groups, treatment is an IV with 2 levels and ethnicity is an IV with 3 levels
Type of Data and Tests of Difference >>nominal/ordinal data --use nonparametric test --e.g., Chi-Square >>interval/ratio data --use parametric test --either some form of t-test or of ANOVA
Independent vs Correlated IVs and Tests of Difference Independent data = can only be in one group at one time >>occurs when there is: --random assignment to groups --assignment based on preexisting characteristics (e.g., diagnosis, age, race, gender, etc.) Correlated data = participants' data can be in more than one level of IV >>most common occurrence: measuring people over time, i.e., time is IV and number of data collections is # of levels >>also occurs when: people in group 1 and group 2 are related to each other (e.g., twins, married couples) >>lastly, occurs when: participants are matched in pairs BEFORE they are assigned to groups, e.g., matched based on income
tests for nominal data chi-square OR multiple sample chi-square --in single-sample chi-square, examining 1 IV > e.g., voter preference --in multiple-sample chi-square, examining more than one IV > e.g., voter preference, gender (2 IV's)
Chi-Square Assumptions KEY REQ'T - independent observations i.e., NON-correlated data --can't measure people before and after and use a chi-square test
tests for interval data t-tests or ANOVAs -use ANOVAs for 3+ groups
tests for ordinal data ordinal counterpart of the single-sample t-test --one group, one name, single vodka = Smirnov (think Smirnoff) ordinal counterpart of the independent-samples t-test --two groups, double vodka, i.e., two names = Kolmogorov-Smirnov Mann-Whitney U-test - for independent data Wilcoxon Matched-Pairs Signed-Ranks test -for matched samples -remember "oxen," like how oxen are matched/yoked together
Tests for ratio data same as interval data --use parametric tests, i.e., t-tests or ANOVAs
what's the *only* test we could use for more than one DV in testing group differences? MANOVA
what test can you conduct when only examining ONE group? t-test for single sample = one group *can only use this when you have some known population value you will compare your sample to (very rare)
what test(s) do you use for: interval data, 1 IV, 2 levels? t-test for independent samples (when data independent) OR t-test for matched samples (when data is correlated)
what group of tests must you use when IVs have 3+ levels? ANOVAs --can no longer use t-tests
what test to run for 1 IV with 3 or more levels? one-way ANOVA --if you randomly assign ppl to 3 tx groups, data independent --one-way repeated measures ANOVA = one gp of ppl who you measure multiple times; time is only variable
what test to run for 2 IV with 3 or more levels? two-way ANOVA --if both variables are independent, two-way ANOVA (aka factorial ANOVA), e.g., gender and treatment group --if data is independent for 1 IV and correlated for the other (e.g., treatment groups and time) = mixed ANOVA or split-plot *likely to see this* --if you have 2 IVs where both are correlated = repeated measures factorial ANOVA or blocked ANOVA *less likely* >>blocking data--when you create groups from a continuous variable
what test to run for 3 IV with 3 or more levels? three-way anova
differences among *t-test *one-way ANOVA *two-way ANOVA *split plot t-test = only 1 IV, on 2 groups one-way ANOVA = only 1 IV, 3 or more groups two-way ANOVA = only 2 IVs, 3 or more groups, both IVs independent split plot = only 2 IVs, 3 or more groups, data correlated for one IV and independent for other IV
covariate potential confound in research study --third variable that you aren't necessarily interested in studying but is affecting your outcome --thus you want to "remove it"
what ANOVA do you use when you have a covariate? ANCOVA - analysis of covariance --to remove confounding/covariate variable
degrees of freedom -- single-sample Chi Square --nominal data --one variable (e.g., voter preference) df = number of levels - 1 *levels = number of groups in variable
degrees of freedom -- multiple-sample Chi-Square --nominal data --more than one variable (e.g., voter preference & gender) df = (number of levels in IV1 - 1) * (number of levels in IV2 - 1) *levels = number of groups in variable
degrees of freedom -- single-sample T-Test df = N - 1 *N - number of subjects
degrees of freedom -- matched-samples T-Test df = # of pairs - 1
degrees of freedom -- independent samples T-Test --two groups --data independent df = N - 2 *N is number of subjects
degrees of freedom -- One-Way ANOVA three possibilities 1) df TOTAL = N -1 2) df BETWEEN groups = # of groups - 1 3) df WITHIN groups = df TOTAL - df BETWEEN
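A compact sketch of the degrees-of-freedom rules from the cards above (the example numbers are invented):
```python
def df_single_chi_square(levels):            # nominal data, one variable
    return levels - 1

def df_multi_chi_square(levels_1, levels_2): # nominal data, two variables
    return (levels_1 - 1) * (levels_2 - 1)

def df_independent_t(n):                     # two independent groups, N subjects total
    return n - 2

def df_one_way_anova(n, groups):             # returns (total, between, within)
    total, between = n - 1, groups - 1
    return total, between, total - between

print(df_single_chi_square(3))      # voter preference with 3 parties -> 2
print(df_multi_chi_square(3, 2))    # preference x gender -> 2
print(df_independent_t(40))         # 40 subjects in 2 groups -> 38
print(df_one_way_anova(30, 3))      # (29, 2, 27)
```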
expected frequencies in a Chi-Square --survey ppl & get scores for each of the categories = obtained frequencies --we want to see if there are sig diffs in obtained freq across groups >>test will calculate the difference between obtained freq and expected freq expected freq = total number of ppl sampled / number of cells *where cells is the number of categories in the one variable you're examining --e.g., voter preference across D, R, independent = 3 cells
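A sketch of the expected-frequency idea for a single-sample chi-square, with made-up voter-preference counts and scipy doing the test:
```python
from scipy.stats import chisquare

observed = [45, 30, 15]                            # hypothetical obtained frequencies (D, R, independent)
expected = [sum(observed) / len(observed)] * 3     # 90 people / 3 cells = 30 per cell

stat, p = chisquare(observed, f_exp=expected)
print(expected, stat, p)
```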
disadvantage of running multiple tests as opposed to one test e.g., multiple t-tests to compare one variable across more than 2 groups vs. one-way ANOVA **increases Type I/alpha error** -every time you run the test, you increase your chances of Type I error -if you run a test 10 times that each has a 5% chance of Type I error (i.e., alpha = .05), you end up with roughly a 40% chance, system-wide, of at least one Type I error (often loosely approximated as 10 x 5% = 50%)
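The exact familywise rate for k independent tests is 1 - (1 - alpha)^k; a one-liner to check the figure on this card:
```python
alpha, k = 0.05, 10
familywise = 1 - (1 - alpha) ** k
print(familywise)   # ~0.40 chance of at least one Type I error across the 10 tests
```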
F-ratio numerical value that is output of ANOVA -ratio of msbg / mswg -ms = mean square (i.e., average) avg variability between groups (msbg) / avg variability within groups (mswg) *variability BETWEEN groups means there are differences - this is usually good for our research! *variability WITHIN groups means there is error - this is bad!
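A sketch of the F-ratio in practice, running a one-way ANOVA on three invented treatment groups with scipy (F is mean square between divided by mean square within):
```python
from scipy.stats import f_oneway

group_a = [4, 5, 6, 5, 4]
group_b = [6, 7, 8, 7, 6]
group_c = [9, 8, 10, 9, 8]

f_stat, p = f_oneway(group_a, group_b, group_c)   # F = msbg / mswg
print(f_stat, p)
```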
output from two-way ANOVA 3 F-ratios 1) main effect for IV-1 --e.g., treatment 2) main effect for IV-2 --e.g., gender 3) interaction effect *any of these being significant is possible* -if you do get interaction and main effect, you interpret interaction effect FIRST -cautiously interpret main effect in light of interaction effects
trend analysis extension of ANOVA --when the ANOVA is significant and the IV has some sort of quantity (like dose of medication), you can run a trend analysis --trend analysis will tell you the trend of the data, e.g., steadily increasing/decreasing, inverted-U
test of relationship or predictions asks if there is a relationship between 2 or more variables, e.g., amount of time someone studies for EPPP and their score --one large group of people who we measure on all variables of interest (not separate groups necessarily) 2 large categories --bivariate --multivariate
predictor variable x what you use to make the prediction from
outcome variable y criterion; thing you are trying to predict *to*
correlation (bivariate test of relationship) --no real IV or DV here >> value ranges from -1.0 to 1.0 (tells strength and direction of the relationship) --positive value = direct relationship = variables increase or decrease together --negative value = inverse relationship = variables move in opposite directions *0 = no relation *|1| = perfect relationship; perfect prediction when plotted, relations closer to 0 will have broader Y-spread; when closer to 1, the points will be closer together
coefficient of determination correlation coefficient, squared --determines shared/explained/accounted-for variability --i.e., certain % of outcome (or DV) that can be explained by predictor (or IV)
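A sketch with made-up study-time and exam-score data: compute Pearson r, then square it to get the coefficient of determination.
```python
from scipy.stats import pearsonr

hours  = [1, 2, 3, 4, 5, 6]
scores = [52, 55, 61, 60, 68, 72]

r, p = pearsonr(hours, scores)
print(r, r ** 2)   # r**2 = share of score variance explained by study hours
```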
regression equation allows you to predict from one variable to another --regression equation is LINE OF BEST FIT >> the line that best fits through your individual data points (if plotted) Y = a + bX thus, can plug in X and predict Y, from coefficients that the computer yields >> determines best-fit line using *least squares criterion* --related to the Pearson r correlation
least squares criterion how the regression process determines the line of best fit --take the distance between each plotted point and a potential regression line, square it, and sum across points --the line with the smallest sum of squared distances ends up being the regression equation
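A sketch of fitting Y = a + bX by the least-squares criterion, using numpy's polynomial fit on the same invented data as above:
```python
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([52, 55, 61, 60, 68, 72], dtype=float)

b, a = np.polyfit(x, y, deg=1)   # slope b and intercept a minimizing squared errors
print(a, b)
print(a + b * 7)                 # predicted Y for a new X of 7
```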
assumptions of bivariate correlations 1) linear relationship between X and Y 2) homoscedasticity (homogeneity of variance) 3) unrestricted range of scores on X and on Y --people you're sampling should go from low to high on both; if you restrict the range, you automatically restrict the correlation --i.e., subjects should be diverse; if everyone is the same, you will "wipe out" the correlation *a curvilinear relationship between X and Y is only allowed by 1 stats test: eta --eta for curvilinear data--
determining which *test of relationship* to use type of data (e.g., nominal, ordinal) --look at both X and Y variables --Pearson r correlation requires both X and Y to be interval or ratio data --ordinal data has 2 options: Spearman's rho & Kendall's tau --when one variable is interval/ratio and the other is dichotomous (i.e., nominal): biserial or point biserial (e.g., gender and income) *true dichotomy is naturally occurring, e.g., gender = point biserial *artificial dichotomy exists when we split a continuous variable into groups, e.g., depressed vs non-depressed = biserial **curvilinear relationship between X and Y: eta, e.g., arousal/anxiety and performance (Yerkes–Dodson law)
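A sketch of matching the correlation to the data type with scipy (all data invented): Pearson r for two interval variables, Spearman's rho when treating them as ranks, point-biserial when one variable is a true dichotomy.
```python
from scipy.stats import pearsonr, spearmanr, pointbiserialr

income   = [30, 42, 38, 55, 61, 47]
years_ed = [12, 14, 13, 16, 18, 15]
gender   = [0, 1, 0, 1, 1, 0]            # true dichotomy coded 0/1

print(pearsonr(years_ed, income))         # interval x interval
print(spearmanr(years_ed, income))        # ordinal / rank-based
print(pointbiserialr(gender, income))     # dichotomous x interval
```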
types of correlations *zero-order (X and Y are correlated and nothing else affecting the relationship) *partial correlation (X and Y are correlated; also, Z - third variable affecting relationship between X & Y; allows you to statistically remove effect of Z from BOTH var's) *semi-partial or part correlation (removing effect of 3rd variable from only one of two variables; you believe it affects one variable, not both of them) *partial and part correlations are akin to ANCOVA
moderator variable affects strength of relationship between predictor (IV) and criterion (DV) --association/strength, etc will vary across levels of the third variable
mediator variable EXPLAINS the relation between X and Y --correlation between X and Y no longer exists with presence of third variable
multivariate tests more than one X & either one Y or more than one Y *i.e., at least two predictors and 1+ dependent variables
multiple R correlation correlation between two or more X's and a single Y --yield correlation coefficient --if we square it, becomes coefficient of multiple determination *akin to Pearson's r and coefficient of determination
multiple regression equation allows prediction from 2 or more predictors (X's) and 1 criterion (Y) Y = a + b1X1 + b2X2 ... CONCERNS: *multicollinearity - happens when predictors are highly correlated with each other; overlap a great deal IDEALLY--predictors have low relationship with each other and moderate-to-strong relationship with outcome *COMPENSATORY* - one predictor can compensate for another and predict "good performance" for Y --example of noncompensatory relationship: multiple cut-off 2 approaches: stepwise regression hierarchical regression
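A sketch of a two-predictor multiple regression Y = a + b1X1 + b2X2, solved with numpy's least-squares routine on hypothetical data (the predictor names are made up):
```python
import numpy as np

x1 = np.array([1, 2, 3, 4, 5, 6], dtype=float)   # e.g., hours studied
x2 = np.array([3, 3, 2, 4, 5, 4], dtype=float)   # e.g., hours slept
y  = np.array([52, 55, 61, 60, 68, 72], dtype=float)

X = np.column_stack([np.ones_like(x1), x1, x2])  # intercept column + predictors
(a, b1, b2), *_ = np.linalg.lstsq(X, y, rcond=None)
print(a, b1, b2)                                 # plug new X1, X2 in to predict Y
```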
stepwise regression computer decides when to put each predictor variable in equation, depending on strength of association with criterion e.g., looking at 10 different variables, it will put them in certain order, beginning with strongest associations
hierarchical regression independent variables are entered according to researcher's theoretical/conceptual rationale for the order of predictors
canonical R (correlation) --extension of Multiple R (corr between 2+ X's and 1Y) --this is relation between... 2 or more IVs & 2 or more DVs
discriminant function analysis predicting group membership or some nominal outcome --e.g., trying to predict which group someone will fall in (special case of multiple regression equation)
log linear (or logit) analysis looks like discriminant analysis --predicting nominal outcome variable AND --independent variables are nominal
structural equation modelling allows us to test causal relationships --correlational methods to test causal relationships WITHOUT manipulating variables most common: LISREL
test of structure takes a bunch of data and examines it for the presence of some underlying structure --how are the things we're measuring (e.g., items and subtests) coming together: independent? similar?
factor analysis (test of structure) --data are scores on subtests or test items --analysis will see how many and what are the significant factors underlying your construct of interest --yields list of factors that are significant --1st factor is always strongest, the one that best explains what's going on
eigenvalue aka characteristic root mathematical # that tells you strength of factor
correlation matrix e.g., table of all subtests on the WAIS and how they are correlated with each other
factor loadings correlations between subtests and a particular factor
two different types of rotations in factor analysis ***orthogonal*** --end up with factors that are UNCORRELATED, no relationship w/each other --allows you to calculate communality (amount of a variable's variance explained by the sum of all factors); square each factor loading and add them up ***oblique*** --factors ARE correlated, e.g., WAIS and WISC composites
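A tiny sketch of the communality calculation under an orthogonal rotation (the loadings are invented): square each loading and sum.
```python
loadings = {"factor_1": 0.70, "factor_2": 0.40}        # one subtest's loadings
communality = sum(l ** 2 for l in loadings.values())
print(communality)   # 0.65 -> 65% of the subtest's variance explained by the factors
```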
factor analysis vs. principal components analysis in principal components analysis - researcher has no underlying idea or expectation for factors
cluster analysis looking for subgroups of individuals --rather than data EX: take MMPI data from police officers and find 3 commonly-occurring profiles