Funding Year 2020 / Scholarships Call #15 / Project ID: 5023 / Project: Sabotage in crowdsourcing
Crowdsourcing contests can generate enormous numbers of contributions, all of which need to be evaluated. This article discusses how the crowd itself can help to solve this problem.
During the devastating Deepwater Horizon accident, British Petroleum (BP) reached beyond its company boundaries for suggestions from external contributors. More specifically, it launched a crowdsourcing contest and asked the general public what could be done to deal with the catastrophe. BP hoped that external contributors would find creative solutions to the pressing problem of the oil spill.
The result of this approach was astonishing: A crowd of external contributors from a vast array of knowledge domains responded to the open call and provided over 120,000 suggestions, giving BP one of the largest pools of external suggestions ever documented.
However, the success of this idea crowdsourcing effort created a new problem: How, and by whom, should the submitted ideas be evaluated?
The evaluation of such large volumes of contributions requires substantial resources if traditional expert panels are used for this purpose (Criscuolo et al., 2017). For example, IBM employed 50 senior executives for several weeks to evaluate the more than 50,000 ideas submitted during its “Innovation Jams” (Bjelland & Wood, 2008). Similarly, it took Google 3,000 employees and “much longer than expected” to evaluate the over 150,000 submissions to its Project 10^100 (Google, 2009). Estimates suggest that evaluating a single idea in a Fortune 100 company takes about $500 and four hours (Klein & Garcia, 2015). In Google’s case, this estimate implies costs of more than $75,000,000 (150,000 ideas × $500) for the evaluation of the contributions alone.
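To make the scale of this bottleneck concrete, here is a back-of-envelope sketch in Python. It simply applies the Klein and Garcia (2015) per-idea estimates to the submission counts reported above; the resulting figures are rough approximations for illustration, not exact accounting.

```python
# Back-of-envelope cost estimate for expert idea screening, using the
# per-idea figures from Klein & Garcia (2015): roughly $500 and four
# hours of expert time per submitted idea.

COST_PER_IDEA_USD = 500
HOURS_PER_IDEA = 4

def screening_cost(num_ideas: int) -> tuple[int, int]:
    """Return (total cost in USD, total expert hours) for a contest."""
    return num_ideas * COST_PER_IDEA_USD, num_ideas * HOURS_PER_IDEA

# Submission counts as reported in the text above.
for contest, ideas in [("BP Deepwater Horizon", 120_000),
                       ("IBM Innovation Jams", 50_000),
                       ("Google Project 10^100", 150_000)]:
    cost, hours = screening_cost(ideas)
    print(f"{contest}: ${cost:,} and {hours:,} expert hours")
```

Even if the real per-idea cost were only a fraction of this estimate, expert-only screening quickly becomes infeasible at these contest sizes.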
One possible solution to the problem of finding the best ideas is the crowd itself. Outsourcing the screening process to the crowd is usually referred to as crowd evaluation or crowd voting. Current research indicates that crowds can indeed compete with experts when evaluating ideas. For example, Magnusson et al. (2016) show “that companies can employ users during the initial screening process using criteria assessment to select the best ideas for further elaboration, something that would significantly reduce the number of ideas”.
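To illustrate what such crowd-based screening can look like mechanically, here is a minimal sketch: many noisy crowd ratings are aggregated per idea, and only the top-ranked ideas are passed on to an expert panel. The ratings, the 1–5 scale, and the mean-rating rule are illustrative assumptions of mine, not the procedure used by Magnusson et al. (2016).

```python
# Minimal sketch of crowd evaluation as an initial screening step:
# aggregate crowd ratings per idea and shortlist the best ones
# for further (expert) elaboration.

from statistics import mean

# idea -> list of crowd ratings on a 1-5 scale (hypothetical data)
crowd_ratings = {
    "idea_a": [4, 5, 4, 3, 5],
    "idea_b": [2, 1, 3, 2, 2],
    "idea_c": [5, 4, 5, 5, 4],
}

def shortlist(ratings: dict[str, list[int]], keep: int) -> list[str]:
    """Rank ideas by mean crowd rating and keep the `keep` best ones."""
    ranked = sorted(ratings, key=lambda idea: mean(ratings[idea]), reverse=True)
    return ranked[:keep]

print(shortlist(crowd_ratings, keep=2))  # -> ['idea_c', 'idea_a']
```

The design choice here is the aggregation rule: a simple mean is the most basic option, and the literature discusses more robust alternatives (e.g., filtering out low-quality raters), but the shortlisting logic stays the same.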
If applied properly, crowd evaluation seems to be a promising method for efficient evaluation tasks like idea screening. However, crowd evaluations are not used only for idea assessment. A much better-known form of crowd evaluation is the consumer review on the internet – a topic I will touch upon in my next blog post.
References:
Bjelland, O. M., & Wood, R. C. (2008). An inside view of IBM’s ‘Innovation Jam’. MIT Sloan Management Review, 50(1), 32.
Criscuolo, P., Dahlander, L., Grohsjean, T., & Salter, A. (2017). Evaluating Novelty: The Role of Panels in the Selection of R&D Projects. Academy of Management Journal, 60(2), 433–460. https://doi.org/10.5465/amj.2014.0861
Google. (2009, September 24). Announcing Project 10^100 idea themes. Official Google Blog. https://googleblog.blogspot.com/2009/09/announcing-project-10100-idea-t…
Klein, M., & Garcia, A. C. B. (2015). High-speed idea filtering with the bag of lemons. Decision Support Systems, 78, 39–50. https://doi.org/10.1016/j.dss.2015.06.005
Magnusson, P. R., Wästlund, E., & Netz, J. (2016). Exploring Users’ Appropriateness as a Proxy for Experts When Screening New Product/Service Ideas. Journal of Product Innovation Management, 33(1), 4–18. https://doi.org/10.1111/jpim.12251