Crowdsourcing: the pros, the cons and the possibilities

By Marlinde Venema

The term ‘crowdsourcing’ has become increasingly popular in academic and mainstream media, as more and more businesses take to the idea. The concept of crowdsourcing, in which commercial enterprises and scientific institutes turn to the public for (online) input in their ventures, has been a source of interest as well as concern. While some rave about this new phenomenon because of its seemingly democratic nature, others worry that crowdsourcing might mean the end of ‘the professional’. But where there is risk, there is opportunity. What could historians gain from interacting with the public?

The myth of amateur crowds

Daren C. Brabham, a university professor who specializes in public relations and new media, examines the portrayal of crowdsourcing in press articles in his paper ‘The Myth of Amateur Crowds: A Critical Discourse Analysis of Crowdsourcing Coverage’ and in his book Crowdsourcing.[1] Brabham focuses on the use of the word ‘amateur’ in relation to crowdsourcing, and he takes serious issue with participants in crowdsourcing ventures being referred to as amateurs. He argues that the distinction so often made between amateurs and professionals is not nearly as clear-cut as it seems. Much of the time those so-called amateurs are in fact well-trained and certified professionals who, for whatever reason, are not part of what Brabham calls ‘the dominant economic system’. Someone might, for example, want to build a résumé or use crowdsourcing as a means to get a job. Being an amateur does not automatically mean being incompetent.

This ‘myth of the amateur crowds’, says Brabham, stands in the way of a fair look at the work these often underpaid ‘amateurs’ do and the contributions they make. After all, the companies they work for profit greatly from this arrangement. Instead of being condescending, we should want to include these ‘amateur crowds’ in the professional system: they are valuable assets and therefore deserve equal workers’ rights and equal pay.

Research method

The research that led Brabham to his conclusions was done using critical discourse analysis (CDA). This hermeneutic approach was applied to one hundred press articles from the LexisNexis database, selected on the basis of a search with two keywords: ‘crowdsourcing’ and ‘amateur’. In ‘The Myth of Amateur Crowds’ Brabham explains in broad strokes how the method works, but he does not make clear exactly how his research was performed. He also neglects to clarify why he searched LexisNexis in the first place and how he narrowed down his results. Occasionally Brabham gives an example of the context in which the keywords appear in certain articles, but the general framework of his research remains somewhat vague. His in-depth conclusions about the portrayal of crowdsourcing, however, are very well thought out.

No more amateurs

The main point – that we should stop referring to participants in crowdsourcing ventures as amateurs – seems valid. The difference between a professional and an amateur cannot always be distinguished so easily. However, that conclusion is not as shocking as Brabham makes it out to be. It actually makes a lot of sense that people who take part in a crowdsourcing project bring a certain kind of expertise. It would, for instance, be unusual for a historian to respond to a crowdsourcing project that asks people to design a new IKEA closet; presumably people with some carpentry experience or architectural training would be the first to accept that challenge. So even though Brabham makes a valid point, he did not reinvent the wheel.


Crowdsourcing science

Crowdsourcing can be a great asset to science as well. This is demonstrated by the website PubPeer, founded in 2012 as an alternative to the traditional scientific peer-review system. On this online platform scientists and ‘amateur’ scientists can give feedback on articles published in academic journals. According to the site’s founders, the traditional way of critiquing a research article is slow and depends on the willingness of academics to look at each other’s work. Because of these flaws, scientific research is vulnerable to mistakes and fraud. So, they reasoned, if more people can pitch in, the risk of such things happening is reduced. The life sciences receive the most comments from PubPeer users, because it is easier to spot a visible mistake in the hard sciences than in, say, a humanities piece. But this sort of crowdsourcing could be very interesting for historians to experiment with, especially since historical research so often involves exploring various explanations for historical events. So why not let people weigh in on the debates?

[1] Daren C. Brabham, ‘The Myth of Amateur Crowds: A Critical Discourse Analysis of Crowdsourcing Coverage’, Information, Communication & Society 15, no. 3 (April 2012): 394–410.