The analysis of websites from a cultural and media studies perspective raises numerous methodological hurdles. When conducting audience research on a particular genre of newspaper or magazine, the text(s) under consideration are central and stable. In contrast, researchers of the Internet can often feel overwhelmed by the vastness and global nature of web communication, which is in a constant state of flux and development.
The following observations on methodology are based on research conducted on the levels of user-interaction offered by British local newspaper websites.
1. Knowing where to start…
To conduct an analysis, one must first determine a sample of websites.
‘Given the large volume of WWW [World Wide Web] texts and that these texts are intertextually connected to each other, a critical question concerning textual analysis is deciding on what could be considered a starting point.’ (Jones, 1999).
The Internet is often described as a ‘network of networks’, and it is this interconnected nature which has proved so troublesome for researchers.
2. It can be helpful to view websites as academic journal articles.
We can view websites as being somewhat like academic journal articles. Whilst websites are not ‘peer-reviewed’ as such, they share a key quality of a journal in so far as status and prominence are achieved through the number of times a website is ‘cited’ online.
The number of inbound links to a website reflects its trust, prestige, authority and credibility within the Internet community (Park, 2003). Similarly, a key way of gaining exposure on search engines, such as Google, is through the number of times a site is linked to by other popular sites.
In the past, news sites were quite ‘insular’ in nature – obtaining status within the web community played second fiddle to the old-fashioned logic that the way to generate revenue was to keep users on a website. So you would find some news sites where the only links out were to the websites of advertisers and commercial sponsors.
Web producers have learned to be more generous with the hyperlinks they make, hoping that respected sites will create links back in return. It should be noted that some large publishers STILL do not understand this (note the recent statements by Rupert Murdoch regarding Google News).
It is legitimate for academics to use hyperlink analysis to determine a sample of sites to study and to assess potential influence in the online community.
The benefits of hyperlink analysis are highlighted by Park (2003): ‘Patterns of hyperlinks designed or modified by individuals or organizations who own websites reflect the communicative choices, agendas, or ends of the owners. Thus, the structural pattern of hyperlinks in their websites serves a particular social or communicative function.’
It’s possible to use commercial software, such as LinkChecker Pro, to conduct analysis of website structures.
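The basic mechanics behind such tools can be sketched with a few lines of standard-library Python. The snippet below extracts hyperlinks from a page and separates outbound links from internal ones; the sample HTML and domain names are entirely hypothetical, and real tools such as LinkChecker Pro of course automate this across whole sites.

```python
# A minimal sketch of outbound-link extraction, using only the Python
# standard library. Sample HTML and domains are invented for illustration.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkExtractor(HTMLParser):
    """Collects the href value of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def outbound_links(html, own_domain):
    """Return hyperlinks that point outside the site's own domain."""
    parser = LinkExtractor()
    parser.feed(html)
    return [link for link in parser.links
            if urlparse(link).netloc not in ("", own_domain)]

sample = (
    '<a href="/local-news">Local news</a>'
    '<a href="http://example-advertiser.com/offer">Advert</a>'
    '<a href="http://example-paper.co.uk/sport">Sport</a>'
)
print(outbound_links(sample, "example-paper.co.uk"))
# Only the advertiser link counts as outbound.
```

Run across a sample of sites, counts of who links to whom provide the raw material for the kind of hyperlink network analysis Park describes.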
3. The accuracy of user data is forever in doubt.
Those seeking newspaper and magazine readership figures may naturally drift towards the website of ABC (the Audit Bureau of Circulations) for accurate, audited data.
Unfortunately, no such universally agreed measurement is in place for web audience figures. To give a crude example, Google Analytics is used to monitor the traffic to this blog, but the data it produces differs from that recorded by rival traffic monitoring systems such as SiteMeter.
Large news sites tend to use traffic monitoring services from companies such as ABCe, ComScore or Hitwise. Peter Kirwin (Forget about ABCe; let’s have an old-fashioned fight about traffic numbers) highlights the discrepancies between figures from these rival website data monitors and asks for more transparency in their methodologies.
To throw an additional spanner in the works, it’s an interesting exercise to compare ‘official’ user figures with those generated by external sites such as Compete.com. It’s almost inevitable that there will be discrepancies in traffic data, given the companies’ varying methodologies.
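The scale of such disagreement is easy to quantify once two figures are in hand. The sketch below computes the percentage gap between a pair of monthly unique-visitor counts; the numbers are invented purely for illustration, not taken from any real monitoring service.

```python
# A toy illustration of how far two audience figures for the same site
# can diverge. The visitor counts below are hypothetical.
def discrepancy(figure_a, figure_b):
    """Percentage difference between two traffic figures,
    relative to their mean."""
    mean = (figure_a + figure_b) / 2
    return abs(figure_a - figure_b) / mean * 100

# Hypothetical monthly unique-visitor counts from two rival monitors.
internal_analytics = 120_000
external_panel = 90_000
print(f"{discrepancy(internal_analytics, external_panel):.1f}% apart")
```

A gap of a quarter or more between two measurements of the same site is exactly the kind of divergence that fuels calls for methodological transparency.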
4. The problems of using Google.
Unless a researcher likes the idea of writing their own software, they may be reliant on Google (or other commercial providers) to seek out websites to study or to search within sampled websites.
Using search engines in academic research presents many challenges. As Witten (2007) puts it: ‘Their architecture is shrouded in mystery. The algorithms they follow are secret. They are accountable to no one.’ No single search engine crawls the entire web, and we have no idea what sites or pages are missing.
Snyder (1999) suggests that the problems with using search engines in link analysis are market-driven, rather than anything particularly wrong with the technology itself. He urged search engine companies to become as transparent as possible in the way they operate, so that academic researchers can use them fully.
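One simple way to make the ‘no single engine crawls the entire web’ problem concrete is to measure how far two engines agree on the same query. The sketch below uses Jaccard overlap on the sets of URLs returned; the result lists are hypothetical stand-ins, since querying real engines programmatically raises exactly the access problems discussed above.

```python
# A sketch of comparing two search engines' results for one query,
# via Jaccard overlap. The result lists are hypothetical.
def jaccard(results_a, results_b):
    """Share of URLs returned by both engines, out of all URLs seen."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b)

engine_a = ["site1.example", "site2.example", "site3.example"]
engine_b = ["site2.example", "site3.example", "site4.example"]
print(jaccard(engine_a, engine_b))
```

A score well below 1.0 for the same query is a reminder that any sample drawn through a single engine reflects that engine’s partial, undisclosed view of the web.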
5. A website is never complete.
With most media, the creative process has already taken place before an artifact is published. On a news website, content changes day by day or perhaps hour by hour. Postings disappear, the headlines on news stories are rewritten, and features that once appeared on homepages are moved elsewhere on a site.
Imfeld (Salwen, 2005) highlights the issue of conducting research ‘in this period of almost constant renovation of websites’. Unlike any other type of media output, a website is forever changing and is never fully complete.
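One practical response to this ‘constant renovation’ is to take timed snapshots of a page and fingerprint each one, so that changes can at least be detected and dated. The sketch below hashes successive snapshots; the snapshot strings are hypothetical, standing in for HTML fetched at different times.

```python
# A sketch of detecting change on a 'never complete' website by
# fingerprinting successive snapshots of a page. Snapshot strings
# are hypothetical; in practice each would be fetched at a set interval.
import hashlib

def fingerprint(page_html):
    """A short, stable fingerprint of one snapshot of a page."""
    return hashlib.sha256(page_html.encode("utf-8")).hexdigest()[:12]

snapshots = [
    "<h1>Council approves new bypass</h1>",
    "<h1>Council approves new bypass</h1>",     # unchanged an hour later
    "<h1>Bypass decision sparks protest</h1>",  # headline rewritten
]
changes = sum(
    fingerprint(a) != fingerprint(b)
    for a, b in zip(snapshots, snapshots[1:])
)
print(f"{changes} change(s) detected across {len(snapshots)} snapshots")
```

Archiving the snapshots alongside their fingerprints at least fixes a version of the text for the researcher to cite, even as the live site moves on.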
This blog post has outlined some of the problems of conducting serious web analysis. The aim is not to locate or suggest solutions. From personal experience, attempts to take techniques used to study magazines and newspapers and then simply transplant them into a web context have been far from successful, and could be considered naive at best.
Jones, S. (1999). Doing internet research: Critical issues and methods for examining the net. Thousand Oaks, Calif. ; London: Sage Publications.
Park, H. W. (2003). Hyperlink network analysis: A new method for the study of social structure on the web. Connections, 25(1), 49-61.
Salwen, M. B. (2005). Online news and the public. Mahwah, N.J.; London: Lawrence Erlbaum.
Witten, I. H. (2007). In Gori M., Numerico T. (Eds.), Web dragons: Inside the myths of search engine technology. Oxford: Morgan Kaufmann.