Mturk for subject pools

Erik Voeten at the Monkey Cage recently reviewed this paper by Kimmo Eriksson in Judgment and Decision Making (JDM):

To demonstrate this I conducted an online experiment with 200 participants, all of whom had experience of reading research reports and a postgraduate degree (in any subject). Participants were presented with the abstracts from two published papers (one in evolutionary anthropology and one in sociology). Based on these abstracts, participants were asked to judge the quality of the research. Either one or the other of the two abstracts was manipulated through the inclusion of an extra sentence taken from a completely unrelated paper and presenting an equation that made no sense in the context. The abstract that included the meaningless mathematics tended to be judged of higher quality. However, this “nonsense math effect” was not found among participants with degrees in mathematics, science, technology or medicine.
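The design is simple enough to sketch in a few lines. Below is a minimal, hypothetical Python rendering of the between-subjects manipulation: each participant sees both abstracts, and which of the two gets the nonsense sentence appended is randomized per participant. The stimulus texts here are placeholders, not Eriksson's actual materials.

```python
import random

# Placeholder stimuli; the actual abstracts and nonsense sentence are in Eriksson's paper.
ABSTRACTS = {
    "anthropology": "Abstract of the published evolutionary anthropology paper...",
    "sociology": "Abstract of the published sociology paper...",
}
NONSENSE_MATH = "An equation lifted from an unrelated paper, meaningless in this context."

def materials_for(rng: random.Random) -> dict:
    """Return both abstracts, with the nonsense sentence appended to one at random."""
    manipulated = rng.choice(list(ABSTRACTS))
    shown = dict(ABSTRACTS)
    shown[manipulated] += " " + NONSENSE_MATH
    return shown

# One participant's materials; seeded only so this sketch is reproducible.
rng = random.Random(42)
print(materials_for(rng))
```

The design point worth noticing is that every participant rates both abstracts, so the comparison of quality ratings does not depend on recruiting two matched samples.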

But the efficiency of the sampling design is what I find interesting:

The study demanded comparable sets of participants from different academic disciplines, all with experience of reading research reports. To find such participants I used Amazon’s Mechanical Turk (mturk.com). This is an online labor market with many thousands of users of varying backgrounds who will do tasks for small monetary compensation. The usefulness of Mturk for online experiments is well documented (Paolacci et al., 2010). I advertised a task of judging the quality of research from abstracts, and asked for users with a postgraduate degree and experience of reading research reports. A fee of $0.50 was offered for approximately five minutes’ work.
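The recruitment step is easy to reproduce programmatically today. Here is a minimal sketch using boto3's MTurk client, which postdates Eriksson's study (he presumably used the requester web interface). The survey URL is a placeholder, and note that the postgraduate-degree screen in the original study was self-reported in the ad text rather than enforced by a qualification.

```python
import boto3

# Sandbox endpoint for testing; drop endpoint_url to post to the live marketplace.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# ExternalQuestion pointing at a survey hosted elsewhere (URL is a placeholder).
question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/abstract-judgment-survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

hit = mturk.create_hit(
    Title="Judge the quality of research from abstracts",
    Description="For workers with a postgraduate degree and experience reading research reports.",
    Keywords="survey, research, abstracts",
    Reward="0.50",                       # the fee Eriksson offered
    MaxAssignments=200,                  # the target sample size
    AssignmentDurationInSeconds=30 * 60,
    LifetimeInSeconds=7 * 24 * 3600,
    Question=question_xml,
)
print(hit["HIT"]["HITId"])
```

Posting to the sandbox first lets you verify the task renders correctly before paying real workers.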