2020 LSK Virtual Colloquium Series 5: Experimental Syntax
Title: Crowd-sourced acceptability judgments: The need to ask "Why?"

Speaker: Carson Schütze (UCLA)

The starting point for this talk is the observation, by now well-known, that when linguists seek to verify their acceptability judgments with large numbers of naive speakers via crowd-sourcing platforms such as Amazon Mechanical Turk, a minority of these judgments (ranging from 5% to 20% or more) will fail to replicate. Although many have been quick to draw conclusions from such results (in various directions), I argue that attempting to do so does not make sense until we ask and answer the question "Why?"—Why are subjects giving the responses they are giving? Using interviews with naive subjects after they have completed computer-based judgment tasks, I demonstrate that there is a wide range of reasons why they give low ratings to sentences that linguists have considered highly acceptable, and vice versa. Many of these reasons are not indicative of genuine differences in what the two populations consider (un)acceptable, but are essentially task artifacts. I propose strategies for reducing these artifacts and thus collecting data that more closely reflect linguists' intended object of study: the subject's grammar.

Dec 18, 2020 10:00 AM in Seoul

This webinar has ended and registration is closed. If you have any questions, please contact the webinar host: LSK staff.