
Construct validity - the extent to which the operational definition measures the characteristics it intends to measure

   constructs are abstractions of concepts that are discussed in social and behavioral studies

      e.g., social status, power, intelligence

   constructs can be measured in many ways

      there are several concrete representations of a construct

Variables are not constructs 

     constructs need to be broken down to be measurable

Variables need operational definitions

   OD - operational definitions define variables concretely for the purposes of research and replication

Any construct can be measured in multiple ways 

      e.g., power is the construct

            variables of power

               amount of influence a person has at work, at home, and in the neighborhood

               each would need to be measured

         each gives an indication of power, but no single one fully represents power

 

Problems with operational definitions 

   every observation is affected by other factors that have no relationship to the construct

      contains some error

other sources of influence

     social interaction of the interview

     interviewer's appearance

     respondent's fear of strangers

     assessment of anxiety

     vocabulary comprehension

     expectations 

     different understandings of key terms 

 

Nomological Network

   the set of construct-to-construct relationships derived from the relevant theory and stated at an abstract theoretical level 

      basically, what relationships do you expect to see? 

      typically the starting point for operational definitions

 

Types of Validity 

   face validity: extent to which a test is subjectively viewed as covering the concept it says it is measuring

      does it look like it tests what it says it does?

   content validity: the extent to which the items reflect an adequate sampling of the characteristic 

      Does the test cover all aspects of the construct as it is defined?

   criterion validity: the extent to which people's scores on a measure correlate with other variables they would be expected to correlate with

      two types: 

         concurrent validity: the extent to which test scores correlate with the behavior the test supposedly measures when the construct is measured at the same time as the criterion. Can also test how well a new test performs against an existing test

         Predictive validity: extent to which the test scores predict a future behavior 
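
Both forms of criterion validity come down to a correlation between the new measure and the criterion. A minimal sketch, using invented scores for a hypothetical new anxiety scale against an established clinical rating:

```python
# Illustrative sketch: criterion (concurrent) validity as a correlation
# between a new test's scores and an established criterion measure.
# All data below are made up for demonstration.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical: new anxiety scale vs. an established clinical rating,
# both administered at the same time (concurrent validity)
new_test = [12, 18, 25, 30, 22, 15, 28, 20]
criterion = [10, 20, 27, 33, 21, 14, 30, 19]

r = pearson_r(new_test, criterion)
print(f"criterion validity (concurrent): r = {r:.2f}")
```

A high positive r suggests the new measure tracks the criterion; for predictive validity the criterion would instead be a behavior measured later.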

Threats to Internal Validity 

   Testing: occurs during the pre-test

      exposure to the pre-test impacts responses to the post-test

Instrumentation

     changes in the measurement instrument over the course of the study

     not an issue with fixed surveys

     an issue with interviews, focus groups, or observations

         person observing can change/adapt, get more experience

            collect pilot data to check raters' training and the script

            interrater reliability 
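
Interrater reliability can be checked with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch with invented ratings from two hypothetical raters coding the same 10 observations:

```python
# Illustrative sketch: Cohen's kappa as a check on interrater reliability.
# The ratings below are invented for demonstration.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters on the same items, corrected for chance."""
    n = len(rater_a)
    # proportion of items on which the raters actually agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # agreement expected by chance, from each rater's category frequencies
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

rater_a = ["yes", "yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes"]
rater_b = ["yes", "yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes"]

print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")
```

Kappa near 1 means the raters (and hence the instrument) are stable; a drop in kappa over the study would signal an instrumentation threat.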

Statistical regression to the mean (threat to statistical conclusion validity) 

   after an extreme score, the next score is likely to be less extreme

   problematic when using scores to categorize 

         low IQ = low performer or high IQ = high performer
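
Regression to the mean can be demonstrated with a short simulation: if each score is a stable true ability plus random noise, people selected for an extreme first score tend to score closer to the mean the second time. All parameters here are arbitrary choices for the demo:

```python
# Illustrative sketch: regression to the mean with simulated test scores.
# score = stable true ability + random measurement noise (arbitrary parameters).

import random

random.seed(42)

true_ability = [random.gauss(100, 10) for _ in range(10_000)]
test1 = [t + random.gauss(0, 10) for t in true_ability]
test2 = [t + random.gauss(0, 10) for t in true_ability]

# Select people with extreme (top 5%) scores on the first test
cutoff = sorted(test1)[int(0.95 * len(test1))]
extreme = [i for i, s in enumerate(test1) if s >= cutoff]

mean1 = sum(test1[i] for i in extreme) / len(extreme)
mean2 = sum(test2[i] for i in extreme) / len(extreme)

print(f"extreme group, test 1 mean: {mean1:.1f}")
print(f"same group, test 2 mean: {mean2:.1f}")  # closer to 100
```

This is why categorizing people by a single extreme score is risky: part of the extremity is noise that will not repeat.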

 

Selection Bias

   result of not doing random assignment 

   the difference in dependent variable due to nonrandom assignment 

      mainly a problem in quasi-experimental and correlational studies
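
The standard remedy is random assignment, which in expectation equalizes the groups on every variable, measured or not. A minimal sketch with invented participant IDs:

```python
# Illustrative sketch: random assignment to conditions avoids selection bias
# by making group membership independent of participant characteristics.
# Participant IDs are invented.

import random

random.seed(7)

participants = list(range(20))  # hypothetical participant IDs
random.shuffle(participants)

treatment = participants[:10]
control = participants[10:]

print("treatment:", sorted(treatment))
print("control:  ", sorted(control))
```

When random assignment is impossible (quasi-experiments), group differences on the dependent variable may reflect pre-existing differences rather than the treatment.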

 

Selection loss (mortality, attrition) 

    when the study loses some participants 

      problematic because there could be something meaningful about the people who dropped out

         could also lead to some groups not being equally represented

 

Experimenter expectancy 

     experimenter bias

        the experimenter creates expectations and treats participants differently

 

Reactivity: Participant Awareness Bias

   people come into the study with their own theories about a particular behavior

         they might: 

             try and make sense of the study

             avoid negative evaluations

               stupid, naive, embarrassed 

             please the researcher (or displease the researcher)

Diffusion of treatment

   The effect observed is due to treatment spreading across groups

      e.g., teaching a strategy to one group; participants of that group tell participants of the control group, and the control group begins to use the strategy

   

Compensatory equalization of treatment 

     the effect due to compensating control groups

         e.g., give money to two schools but only tell one what to do with it; both schools show an increase in the dependent variable, but it can't be attributed to the independent variable

 

Compensatory rivalry 

   the effect due to the control group wanting to prove the research wrong (John Henry effect)

      the control group is the underdog 

      resentful demoralization    

         the control group wants to retaliate

         the control group partakes in hypothesis guessing and can misconstrue the data 

 

Hawthorne Effect

   The effect in the dependent variable due to being assigned to the treatment group

      changes were observed because changes were being made in the environment

      the participants knew about the observation, and knowing this altered their reactions

 

Demand characteristics 

   features that communicate information about the study (i.e., the hypothesis)

   social roles of participants

      good, faithful, negativistic, apprehensive 

 

Other considerations

   mundane realism - would this happen in real life?

   strategies to avoid participant awareness bias: 

      deception, cover stories (high psychological realism, high experimental realism)

      double blind design, if not possible: 

         stay blind until the last possible minute

         use multiple researchers

         make the independent variable and dependent variable hard to see

            "accident" or "whoops" maneuver 

                - "accidentally" lose the test results so the participant must retake the test after the true treatment

            confederates

                  people who are part of the experiment but are not being tested

            "multiple study" ruse

            measure behavior that is hard to control

 

Measuring the dependent variable 

   option 1: behavioral observation of participants' responses

         typically the first choice, but uncommon in practice

   option 2: self-report (liking of another person)

           limited by: 

               participants not knowing the answer

               may base answers on inaccurate theories

               may report what they think is desirable

   option 3: behavioroid (how much labor a participant says they would perform for someone else)

               people might lie or overestimate

Module by Tanner Lewis, updated more than 1 year ago