PSYC 381

Course by Tanner Lewis

Description

Junior Seminar - an overview of psychological research concepts and critical literature review techniques

Module Information

Scientific Method
- Make observations and come up with a question
- Form a hypothesis
  - Look at other research and evaluate it critically
  - Essentially an educated guess
- Empirically test the hypothesis
  - Psychology uses empirical data to support theories
- Revise (if needed) and repeat
- Communicate findings to further education on the subject
  - Journal articles
Causality
- The primary goal of science is to establish cause-and-effect relationships
  - "A causes B"
    - A is the independent variable (IV): the variable being manipulated to produce a cause-and-effect relationship with the dependent variable
    - B is the dependent variable (DV): the variable being measured to see the effect of the independent variable
- To establish causality, an experiment is needed
- The experiment
  - Control over A (the independent variable)
    - We get control by creating different groups to compare against each other
    - Typically an experimental group and a control group
      - Both groups are treated the same except that one receives the manipulated independent variable
  - Control over confounding variables
    - We control variables that could interfere with the experiment and cause false results by using random assignment
  - We measure the dependent/outcome variable for every subject
  - Random assignment as a control for A
    - By randomly assigning participants, we make sure the groups are equivalent before the treatment (see the sketch below)
    - This allows the researcher to attribute any change to the manipulation of the independent variable
- Meanings of "control"
  - Control over the independent variable
  - Control over the environment
  - Any other attempt to eliminate the influence of a threat to a causal inference
- Problems with experiments
  - Can't be used to answer every question
  - Are not natural
  - Unable to control for everything
- Options other than a traditional experiment
  - Quasi-experiment: no control over the independent variable
  - Correlational study: only looks at the relationship between variables; no cause-and-effect relationship (causality)
- The importance of this is validity
  - Validity is the accuracy of the experiment or study
    - Determines whether the question you asked was answered correctly
  - Two main types:
    - Internal validity: the degree to which the results can be attributed to the independent variable and not some other rival explanation
    - External validity: the extent to which the results are generalizable
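A minimal sketch (not from the course) of what random assignment buys you, using NumPy. The sample size and the "baseline_score" pre-treatment measure are made up for illustration: with random assignment, the two groups should be roughly equivalent before any treatment is applied.

```python
# Minimal random-assignment sketch; data and group sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

n = 100
baseline_score = rng.normal(loc=50, scale=10, size=n)  # pre-treatment measure

# Randomly assign each participant to the experimental or control group
ids = rng.permutation(n)
experimental = baseline_score[ids[: n // 2]]
control = baseline_score[ids[n // 2 :]]

# Groups should be roughly equivalent before treatment, so later differences
# on the DV can be attributed to the IV rather than pre-existing differences.
print(f"experimental mean = {experimental.mean():.2f}")
print(f"control mean      = {control.mean():.2f}")
```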
Reliability
- An experiment cannot be valid if it is not reliable
  - Reliability is the relative consistency of test scores and other educational and psychological measures
    - However, no measurement is perfect
  - Ultimately becomes an issue of validity
- Reliability of measures
  - Refers to whether the measures used actually measure what is being tested, consistently
  - Internal consistency/reliability (Cronbach's alpha)
    - If the researcher does not report Cronbach's alpha, that is a bad sign
  - Poor test-retest performance (correlation)
  - Low inter-rater reliability (correlation)
    - Multiple raters should be used; if there are not multiple raters, that is a bad sign
  - Alternate/parallel forms (correlation)
  - The reliability coefficient (Pearson's r) - see the sketch after this list
    - .90+ = excellent
    - .80-.89 = good
    - .70-.79 = adequate
    - < .70 = may have limited applicability
- Reliability of treatment implementation from participant to participant
  - Lack of standardization in the study protocol introduces the chance that observed covariation may not be related to the treatment
    - If the groups are treated differently, the study can't determine that the change is attributable to the manipulation of the IV
  - Any lack of control over test conditions increases the chance that observed covariation may not be related to the treatment
    - If the groups are tested under different conditions, any change cannot be attributed to the manipulation of the IV
    - Examples: inconsistent instructions/delivery; test conditions not the same
- Regression to the mean
  - Essentially means that things tend to even out over time
  - Extreme scores are rare and usually flatten out on retest
    - This can be problematic when conditions of an experiment are based on extreme scores (e.g., high and low IQ scores)
- Random heterogeneity of participants
  - Individual differences among participants that are related to the dependent variable can cause issues
    - Some participants might be more affected by the treatment than others because of this
  - Solutions:
    - Use people from the same group (homogeneous), e.g., college students, south, north
      - Can be a disadvantage because results can't be generalized
    - Random assignment
    - Within-subjects design
    - Matching participants
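A minimal sketch (not course material) of two reliability estimates in Python: Cronbach's alpha for internal consistency and Pearson's r for test-retest reliability. The item-response matrix and the time-1/time-2 scores are simulated, made-up data; the point is the calculation, which can then be compared against the .70/.80/.90 benchmarks above.

```python
# Cronbach's alpha and test-retest correlation on simulated data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows = respondents, columns = items of one scale."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(1)
true_score = rng.normal(size=200)

# Five items that all tap the same construct, each with measurement noise
items = true_score[:, None] + rng.normal(scale=0.8, size=(200, 5))
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")

# Test-retest reliability: correlate the same measure at two time points
time1 = true_score + rng.normal(scale=0.5, size=200)
time2 = true_score + rng.normal(scale=0.5, size=200)
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest r = {r:.2f}")  # interpret against the benchmarks above
```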
Construct validity - the extent to which the operational definition measures the characteristic it intends to measure
- Constructs are abstractions of concepts discussed in social and behavioral studies
  - e.g., social status, power, intelligence
- Constructs can be measured in many ways
  - There are several concrete representations of a construct
- Variables are not constructs
  - Constructs need to be broken down to be measurable
- Variables need operational definitions
  - Operational definition (OD): defines variables for the purpose of research and replication
- Any construct can be measured in multiple ways
  - e.g., power is the construct
    - Variables of power: amount of influence a person has at work, at home, and in the neighborhood
    - Each would need to be measured
    - All give indications of power, but no single one represents power
- Problems with operational definitions
  - Every observation is affected by other factors that have no relationship to the construct
    - Contains some error
  - Other sources of influence:
    - Social interaction of the interview
    - Interviewer's appearance
    - Respondent's fear of strangers
    - Assessment of anxiety
    - Vocabulary comprehension
    - Expectations
    - Different understandings of key terms
- Nomological network
  - The set of construct-to-construct relationships derived from the relevant theory and stated at an abstract theoretical level
    - Basically, what relationships do you expect to see?
    - Typically the starting point for operational definitions
- Types of validity
  - Face validity: the extent to which a test is subjectively viewed as covering the concept it says it is measuring
    - Does it look like it tests what it says it does?
  - Content validity: the extent to which the items reflect an adequate sampling of the characteristic
    - Do the items cover all aspects of how the construct is defined?
  - Criterion validity: the extent to which people's scores on a measure are correlated with other variables that one would expect them to be correlated with
    - Two types:
      - Concurrent validity: the extent to which test scores correlate with the behavior the test supposedly measures when the construct is measured at the same time as the criterion; can also test how well a new test measures against an existing test
      - Predictive validity: the extent to which test scores predict a future behavior
Threats to Internal Validity
- Testing
  - Occurs during the pre-test
  - Exposure to the pre-test impacts responses to the post-test
- Instrumentation
  - Changes in the measurement instrument
  - Not an issue with surveys
  - An issue with interviews, focus groups, or observation
    - The person observing can change/adapt and gain experience over the study
    - Use pilot data to check raters, training, and the script
    - Check interrater reliability
- Statistical regression to the mean (also a threat to statistical conclusion validity)
  - An extreme score followed by another score is likely to be less extreme
  - Problematic when using scores to categorize (e.g., low IQ = low performer, high IQ = high performer)
- Selection bias
  - Results from not doing random assignment
  - Differences in the dependent variable due to nonrandom assignment
  - A problem for quasi-experimental and correlational studies
- Selection loss (mortality, attrition)
  - When the study loses some participants
  - Problematic because there could be something meaningful about the people who dropped out
    - Could also lead to some groups not being equally represented
- Experimenter expectancy
  - Experimenter bias: the experimenter forms expectations and treats participants differently
- Reactivity: participant awareness bias
  - People come into the study with their own theories about a particular behavior
  - They might:
    - Try to make sense of the study
    - Avoid negative evaluations (appearing stupid, naive, embarrassed)
    - Try to please (or displease) the researcher
- Diffusion of treatment
  - The observed effect is due to the treatment spreading across groups
    - e.g., a strategy is taught to one group, its participants tell the control group, and the control group begins to use the strategy
- Compensatory equalization of treatment
  - The effect is due to compensating the control group
    - e.g., give money to two schools but only tell one what to do with it; both schools show an increase in the dependent variable, so the change can't be attributed to the independent variable
- Compensatory rivalry
  - The effect is due to the control group wanting to prove the research wrong (John Henry effect)
    - The control group is the underdog
  - Resentful demoralization
    - The control group wants to retaliate
    - The control group engages in hypothesis guessing and can distort the data
- Hawthorne effect
  - An effect on the dependent variable due to being assigned to the treatment group
  - Changes were observed because changes were being made in the environment
  - The participants knew they were being observed, and knowing this altered their behavior
- Demand characteristics
  - Features that communicate information about the study (i.e., the hypothesis)
  - Social roles of participants: good, faithful, negativistic, apprehensive
- Other considerations
  - Mundane realism: would this happen in real life?
  - Strategies to avoid participant awareness bias:
    - Deception, cover stories (high psychological realism, high experimental realism)
    - Double-blind design; if not possible:
      - Stay blind until the last possible minute
      - Use multiple researchers
      - Make the independent and dependent variables hard to see
        - "Accident" or "whoops" maneuver: "accidentally" lose the test results so participants retake the test after the true treatment
        - Confederates: people who are part of the experiment but are not being tested
        - "Multiple study" ruse
        - Measure behavior that is hard to control
- Measuring the dependent variable
  - Option 1: behavioral observation of participants' responses
    - Typically the first choice, but uncommon in practice
  - Option 2: self-report (e.g., liking of another person)
    - Limited by: participants not knowing the answer, basing answers on inaccurate theories, or reporting what they think is desirable
  - Option 3: behavioroid measures (e.g., how much labor a participant says they would perform for someone else)
    - People might lie or overestimate
External Validity
- Process:
  - Step 1: define a target population of individuals, settings, or times
  - Step 2: draw samples from those populations (see the sketch below)
    - Two types:
      - Representative sample: corresponds to a well-known population and is achieved by a procedure designed to ensure representativeness
        - Very rare
      - Accidental sample / sample of convenience: not drawn by such a procedure
        - May or may not be representative, and you may not be able to tell
- Features of external validity
  - 1. Generalizing to particular target individuals, settings, and times
    - e.g., does this finding hold true for other individuals under the same circumstances?
  - 2. Generalizing across types of individuals, settings, and times
    - e.g., does this finding hold true for different individuals under different circumstances?
- Threats to external validity
  - 1. Interaction of selection and treatment
    - The population is adequately targeted, but the results only apply to the participants who show up (volunteers, exhibitionists, hypochondriacs, scientific do-gooders)
    - Solution: make participation as convenient and attractive as possible to get broad participation
  - 2. Interaction of setting and treatment
    - The effect of the treatment depends on the setting
      - e.g., can you get the same results at a factory as you do at a university?
    - Solution: vary the setting and see if the same patterns show up
  - 3. Interaction of history and treatment
    - Different time periods produce different results
      - Examples: the cohort effect; a national tragedy
    - Solution: replication
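A minimal sketch (not course material) contrasting a representative sample drawn by simple random sampling with a convenience sample. The hypothetical population of ages and the "youngest sign up first" convenience rule are made up purely to show how a convenience sample can misrepresent the population.

```python
# Random vs. convenience sampling on a hypothetical population.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical target population: ages of 10,000 adults in some region
population_age = rng.normal(loc=45, scale=15, size=10_000).clip(18, 90)

# Representative sample: a procedure (simple random sampling) designed
# to mirror the population
random_sample = rng.choice(population_age, size=200, replace=False)

# Convenience sample: whoever is easiest to reach, here assumed to be
# the 200 youngest people
convenience_sample = np.sort(population_age)[:200]

print(f"population mean age    = {population_age.mean():.1f}")
print(f"random sample mean     = {random_sample.mean():.1f}")       # close to population
print(f"convenience sample mean = {convenience_sample.mean():.1f}")  # badly off
```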
Surveys
- Common problems with surveys
  - Unnecessary questions
    - Demographic questions you should already have the answers to
    - Ask only what is important and relevant
      - e.g., don't ask where respondents are from if you are focused on a certain area
  - Biased/leading questions
    - "Community organizing is hard. Do leadership trainings help you feel prepared for community organizing?"
      - This is a leading question
  - Double-barreled or compound questions
    - "I feel welcomed by staff and other youth at the center"
      - Asks about staff and other youth in one item when they could warrant different answers
    - Race/ethnicity asked as a single item
  - Double negatives (ambiguous and confusing)
    - "Does it seem possible or impossible to you that the Nazi extermination of the Jews never happened?"
      - Which is it? Possible or not possible?
  - Assuming prior knowledge or understanding
  - Inadequate response items
    - Categories are not exhaustive
    - Categories are not mutually exclusive: response items overlap
  - Rating-level inconsistencies
    - Usually not a problem within a measure of a construct, but can be across longer surveys
  - Survey length
    - Only ask what you need to know right now
  - Too many open-ended questions
Statistical Conclusion Validity
- Before assessing other forms of validity, check statistical conclusion validity
  - Do the variables actually covary?
- Covariation
  - A necessary condition for inferring cause: with no variance, no relationship can be detected
  - Covariation when there is an actual relationship? GREAT
  - Covariation when there isn't an actual relationship? NOT GOOD - Type I error
  - No covariation when there isn't an actual relationship? GREAT
  - No covariation when there is an actual relationship? NOT GOOD - Type II error
- Type I error: false positive
- Type II error: false negative
- How to assess Type I and Type II errors
  - Alpha probability: conventionally p <= .05 for the chance of a Type I error
- Sophisticated answers: assess threats to statistical conclusion validity (see the power sketch below)
  - Low statistical power
    - 1. Is the study sensitive enough to permit reasonable statements about covariation?
    - 2. How much power does the study have to detect a difference when one actually exists?
    - Two functions:
      - A priori (i.e., when planning a study)
        - Conduct a power analysis to determine the sample size required to detect an effect of the desired magnitude (e.g., small, medium, large)
        - Use a power-analysis calculator or formula
      - Post hoc (i.e., when evaluating a study's power)
        - Most common approach: significance testing (p <= .05)
        - Becoming more popular: confidence intervals, or the magnitude of the effect that could reasonably have been detected
  - Violated assumptions of statistical tests
    - Most common assumptions: equivalent groups at the start, normality, equal variances
    - Assumptions vary by statistical test
  - Fishing
    - Chances are high that if you test every possible relationship, something will be significant
      - Hunting for an effect by looking at the data from every angle
      - Increases Type I error
    - What to look for: post-hoc tests presented as a priori hypotheses; multiple tests when a single test would be sufficient
    - What not to do: present post-hoc/exploratory analyses as a priori hypotheses
  - Reliability of measures
  - Reliability of treatment implementation from participant to participant
  - Random irrelevancies in the test conditions
  - Regression to the mean
  - Random heterogeneity of participants
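A minimal sketch (not course material) that estimates Type I error rate and statistical power by simulation for a two-group comparison with an independent-samples t-test. The per-group sample size, effect size, and number of simulated studies are arbitrary illustration values.

```python
# Estimating Type I error and power by Monte Carlo simulation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_per_group, effect_size, alpha, n_sims = 30, 0.5, 0.05, 2000

false_positives = 0  # significant result when there is no true effect (Type I)
hits = 0             # significant result when there is a true effect (power)

for _ in range(n_sims):
    control = rng.normal(0, 1, n_per_group)

    # No true effect: any "significant" result here is a Type I error
    null_group = rng.normal(0, 1, n_per_group)
    if stats.ttest_ind(null_group, control).pvalue <= alpha:
        false_positives += 1

    # True effect of d = 0.5: failing to detect it would be a Type II error
    treated = rng.normal(effect_size, 1, n_per_group)
    if stats.ttest_ind(treated, control).pvalue <= alpha:
        hits += 1

print(f"Type I error rate ~ {false_positives / n_sims:.3f} (should be near alpha = {alpha})")
print(f"Power ~ {hits / n_sims:.3f}  (Type II error rate ~ {1 - hits / n_sims:.3f})")
```

Run a priori (before collecting data) this kind of simulation, or a power-analysis formula, tells you how large a sample you would need to reach a desired power for a given effect size.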
Criterion-Oriented Validity
- General process: the researcher administers the test, obtains a measure of the criterion on the same subjects, and computes a correlation
- Criterion-oriented validity is similar to the idea of the nomological network
- Convergent validity: measures of constructs that theoretically should be related to each other are, in fact, observed to be related to each other
  - Testing for convergence across different measures or manipulations of the same thing
- Divergent/discriminant validity: measures of constructs that theoretically should not be related to each other are, in fact, observed to not be related to each other
  - Testing for divergence between measures and manipulations of related but conceptually distinct things
- The multitrait-multimethod (MTMM) matrix (see the sketch below)
  - Coefficients in the reliability diagonal should consistently be the highest in the matrix
    - Basically, a trait should be more highly correlated with itself than with anything else
  - Coefficients in the validity diagonals should be significantly different from zero and high enough to warrant further investigation
  - A validity coefficient should be higher than the values lying in its column and row in the same heteromethod block
  - A validity coefficient should be higher than all coefficients in the heterotrait-monomethod triangles
  - The same pattern of trait interrelationships should be seen in all triangles
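A minimal sketch (not course material) of the convergent and discriminant correlations that make up a multitrait-multimethod matrix. The two traits ("anxiety", "extraversion") and two methods ("self_report", "peer_rating") are made-up examples; the simulated data are built so that same-trait measures correlate and different-trait measures do not.

```python
# Convergent vs. discriminant correlations on simulated trait/method data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 300
anxiety = rng.normal(size=n)
extraversion = rng.normal(size=n)  # independent of anxiety by construction

df = pd.DataFrame({
    "anxiety_self_report":      anxiety      + rng.normal(scale=0.6, size=n),
    "anxiety_peer_rating":      anxiety      + rng.normal(scale=0.6, size=n),
    "extraversion_self_report": extraversion + rng.normal(scale=0.6, size=n),
    "extraversion_peer_rating": extraversion + rng.normal(scale=0.6, size=n),
})

corr = df.corr()
print(corr.round(2))

# Convergent (same trait, different method) should be high:
print("convergent:", round(corr.loc["anxiety_self_report", "anxiety_peer_rating"], 2))
# Discriminant (different traits) should be low:
print("discriminant:", round(corr.loc["anxiety_self_report", "extraversion_peer_rating"], 2))
```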