Crowd-voting mechanisms are commonly used to implement scalable evaluations of crowdsourced creative submissions. Unfortunately, the use of crowd-voting also raises the potential for gaming and manipulation. Manipulation is problematic because (i) submitters’ motivation depends on their belief that the system is meritocratic, and (ii) manipulated feedback may undermine learning, as submitters seek to learn from the evaluations they receive and from those of their peers. In this work, we consider a design approach to addressing the issue, focusing on the notion of strategic opacity, i.e., purposefully obfuscating evaluation procedures. On the one hand, opacity may reduce the incentive for, and thus the prevalence of, vote manipulation, and submitters may instead dedicate that time and effort to improving the quantity or quality of their submissions. On the other hand, because opacity makes it difficult for submitters to discern the returns to legitimate effort, submitters may also reduce their submission effort or simply exit the market. We explore this tension via a multi-method study employing field experiments at 99designs and a controlled experiment on Amazon Mechanical Turk. We observe consistent results across all experiments: opacity reduces gaming in these crowdsourcing contests and significantly shifts the allocation of effort toward legitimate, rather than illegitimate, activities, with no discernible influence on contest participation. We discuss boundary conditions and the implications for contest organizers and contest platform operators.