Running field experiments using Facebook split test
Abstract
Business researchers use experimental methods extensively due to their high internal validity. However, controlled laboratory and crowdsourcing settings often introduce issues of artificiality, data contamination, and low managerial relevance of the dependent variables. Field experiments can overcome these issues but are traditionally time- and resource-consuming. This primer presents an alternative experimental setting for conducting online field experiments in a time- and cost-effective way. It does so by introducing the Facebook A/B split test (FBST) functionality, which allows for random assignment of manipulated variables embedded in ecologically valid stimuli. We compare and contrast this method with laboratory settings and Amazon Mechanical Turk in terms of design flexibility, managerial relevance, data quality control, and sample representativeness. We then provide an empirical demonstration of how to set up, pre-test, run, and analyze FBST experiments.
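As context for the analysis step mentioned above, a two-cell split test is commonly evaluated by comparing conversion proportions between the randomly assigned ad variants. The sketch below is a minimal, hedged illustration with hypothetical counts (not data from the paper), using a two-proportion z-test from statsmodels.

```python
# Minimal sketch of analyzing a two-cell split test.
# The counts below are hypothetical placeholders, not results from the paper.
from statsmodels.stats.proportion import proportions_ztest

clicks = [412, 365]        # conversions observed in ad variant A and variant B
reach = [10_000, 10_000]   # users randomly assigned to each variant by the split test

# Two-proportion z-test comparing conversion rates across the two variants
z_stat, p_value = proportions_ztest(count=clicks, nobs=reach)

print(f"Variant A rate: {clicks[0] / reach[0]:.3%}")
print(f"Variant B rate: {clicks[1] / reach[1]:.3%}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```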