In this third instalment of an occasional series, I review a polemic of mine published last year in Convergence.
I have discussed this article several times on this blog, and the reason for doing so is simple: it's really important. More on that later.
Normally in this series I expand on the methodology and tests conducted, to give a more comprehensive insight into the mechanics of music piracy research. In this instance, however, the methodology was straightforward: a comprehensive review of the literature. Though not particularly exciting, it is worth explaining exactly how this works.
In effect, I spent years searching for and reviewing published work on music piracy in academic journals. Beginning with keywords like 'music piracy' and 'file sharing', I subsequently added more terms to focus on particular areas of interest. Over time, I built up a large list of key authors and journals publishing work in these areas, which in turn guided further searches.
It was time-consuming, but worthwhile.
It quickly became clear that the research methods used to explore music piracy were varied and, by my standards, often quite weak. This is the focus of the article under review in this blog entry.
In the paper, which was itself a follow-up to a special issue of Convergence on digital piracy, I reviewed the conventional approaches used to measure music piracy and critiqued them, focusing on issues such as sampling. I also proposed some potential future methodologies that would benefit the literature, given the gaps observed in my review.
At a glance, the paper provides a nice summary of how music piracy research is often conducted.
One of the aims of this modest blog is to give you, dear reader, insight into how researchers arrive at their conclusions, not just to discuss the conclusions themselves. With that in mind, I urge you to give it a read.
Over and out.