Search Engine Optimisation for Academics

It’s an established part of internet humour that almost the only reason to use any other search engine is to get to the Google homepage. What gets overlooked here, however, is the process by which one’s search results are ranked and displayed. Type a search phrase into PubMed or Google Scholar and you’ll get a list of links called a ‘search engine results page’ (SERP). Barring some internet randomness, you’re most likely to see a link to a highly cited, reputable, well-known research article at the top. And even if an enterprising individual were to write another research paper with the same title (in the hope that it would rank highly just by virtue of having the same name), it is not very likely to show up anywhere close to the first results page. The internet is an ocean of content, which poses a real challenge for research professionals looking to get their own papers and websites onto that front page. After all, people rarely dig deeper than the links on the first page of their search results. This is where search engine optimisation (SEO) comes in handy—using these techniques, professionals can push their work as close to the first results page as possible.

A little background

Now, SEO isn’t a new concept. In fact, it has been around since 1997, even before Google. The importance of search page rankings was mostly overlooked until Google arrived and began to index pages based on specific, measurable criteria. Google uses small programmes called ‘crawlers’ (also known as ‘spiders’), which are given an ‘index’ of keywords to look for. The crawlers then check for URLs that could be associated with a particular word in this index. When a crawler finds a match—say it looks for the word ‘proteins’ and comes across the famous Lowry research paper on the Journal of Biological Chemistry’s website—it stores this information. To rank websites (and the research papers hosted on them), the crawler also weighs other factors, such as the number of times a keyword is repeated in a paper, the number of times a research blog has been linked to from other places on the internet, and so on.
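To make the idea concrete, here is a toy sketch (in Python, using a hypothetical URL and page) of how a crawler might map the keywords in its index to the URLs that mention them. Real crawlers are vastly more sophisticated, but the bookkeeping looks roughly like this:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the visible text of an HTML page."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def index_page(url, html, keyword_index):
    """Map each keyword found in the page's text back to its URL.

    Illustrative only: uses naive substring matching, no tokenisation,
    no ranking signals.
    """
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.chunks).lower()
    return {kw: url for kw in keyword_index if kw in text}

page = "<html><body><p>Protein measurement with the Folin phenol reagent</p></body></html>"
hits = index_page("https://www.jbc.org/lowry1951", page, ["proteins", "protein", "reagent"])
# 'protein' and 'reagent' occur in the page text, so both map to the URL
```

A real index would, of course, be inverted (keyword → many URLs) and weighted by the ranking factors mentioned above; this sketch only shows the match-and-record step.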

The basics

The foremost task is to make a research paper easier to crawl by ensuring that it contains the words people actually search for. This matters because people tend to search using generic phrases: most university students looking for references are likely to use search phrases such as ‘research articles on stem cell therapy’ or ‘most cited biology papers of 2018’. Google Scholar’s guidelines are quite concise. They require that a research paper be in the correct format (.pdf is preferred), with the title of the paper mentioned clearly at the top. To make it easier for Google to present papers with multiple authors, they also ask for the name of each author to be noted clearly on a separate line. Finally, a clear reference section allows Google to check and assess the quality of the paper. This last point is very important, as citations play a major role in the way search engines like Google Scholar or PubMed treat research papers.

There are some caveats to this: simply having a larger number of citations won’t necessarily translate into better search rankings, although Google does seem to trust a paper more when it sees it mentioned elsewhere, particularly in trusted research journals and blogs. More keyword mentions aren’t always a good thing either, as mindlessly repeating the same set of keywords can lead to lower rankings. Search engines tend to penalise such articles for being ‘over-optimised’: when the crawler sees that a particular set of keywords has been repeated at an unnaturally high frequency, the page can be marked as spam. Even worse is Google Panda, a programme that Google runs to check whether a page has been written for users or for search engines. If the programme decides that it is the latter, it can remove the page from the search results. This problem can be fixed by rewriting and editing the content, but as Google only runs the programme once every few months, the webpage or article might not recover its traffic very quickly.
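As a rough illustration of the over-optimisation problem, a writer could run a simple keyword-density check on their own text before publishing. The threshold below is an illustrative assumption, not a published Google rule:

```python
import re
from collections import Counter

def keyword_density(text, max_share=0.03):
    """Return the words whose share of the text exceeds max_share.

    The 3% default is an assumed, illustrative cut-off for "unnaturally
    high frequency" -- search engines do not publish their actual limits.
    """
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words)
    return {w: n / total for w, n in counts.items() if n / total > max_share}

draft = ("stem cell therapy stem cell therapy stem cell therapy "
         "and one more note on results")
# With a 10% threshold, only the heavily repeated phrase words are flagged
flagged = keyword_density(draft, max_share=0.1)
```

On a realistic, article-length draft, a handful of topical words exceeding the threshold is normal; a whole phrase dominating the distribution, as in the toy example above, is the pattern that gets pages marked as spam.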

Generating original content

A good way to get people to link to work at varying stages of the research process is to start a blog or a newsletter. For research professionals, a blog or personal website could serve as a repository for unpublished papers, hypotheses, and concept papers. The advantage of creating such a repository is that many research papers published in journals exist behind paywalls. So, a researcher could simply provide a link to their published material (which is behind a paywall) from their blog (which is open access). If the material in their personal repository is good enough, it could encourage people to pay to access the published work. Alternatively, the reverse is also possible: a researcher could include links to their blog in their published papers. By encouraging other researchers and students to access their trove of unpublished work, a researcher could increase their value and name recognition. While this works for attracting interaction from the academic community, a researcher may have to use other tools to get the attention of regular internet users. This can be achieved by using meta-tags, which describe the purpose of a webpage in further detail. For instance, a blog post about a paper might carry generic tags like ‘science’ or ‘proteins’, but its meta-tag description could read ‘Protein Measurement with the Folin Phenol Reagent’. Meta-tags can thus be freely written to target the search engine—they do not have the restriction of being ‘natural-sounding’ like regular content does.
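For illustration, meta-tags live in a page’s HTML head and can be written by hand or generated with a small helper like the sketch below. The ‘description’ tag is standard HTML; the ‘citation_*’ names follow the Highwire-style tags that Google Scholar’s inclusion guidelines describe, though the exact tag names should be verified against those guidelines before use:

```python
from html import escape

def meta_tags(description, title, authors):
    """Emit the <meta> lines for a page hosting a paper.

    "description" is a standard HTML meta-tag; the "citation_*" names are
    the Highwire-style tags Google Scholar is documented to read -- treat
    them as an assumption to verify against Scholar's own guidelines.
    """
    tags = [f'<meta name="description" content="{escape(description)}">',
            f'<meta name="citation_title" content="{escape(title)}">']
    for author in authors:
        tags.append(f'<meta name="citation_author" content="{escape(author)}">')
    return "\n".join(tags)

head = meta_tags(
    "Protein Measurement with the Folin Phenol Reagent",
    "Protein Measurement with the Folin Phenol Reagent",
    ["Lowry, O. H.", "Rosebrough, N. J."],
)
```

Each author gets a separate `citation_author` tag, mirroring the one-author-per-line requirement Google Scholar applies to PDFs.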

Earning recognition

The importance of adopting the alternative strategies described above cannot be overstated, since a research paper can greatly improve its visibility by earning more citations. Some have even advocated that research professionals should become Wikipedia contributors in order to improve their chances of getting noticed by Google. This makes sense, as Wikipedia is a trusted site (by search engines at least) and usually ranks at the top of search results.

Two points remain: the impact of open access versus featuring in a prestigious journal, and optimising citations. On the latter, Google Scholar provides the submitting scholar with a citation index that allows them to tweak their citation list so that the more prestigious citations are highlighted over others. It also gives a researcher the option of setting an ‘alert’ for every time their paper earns a citation. By carefully monitoring these alerts and the journals from which the citations come, a researcher can achieve limited control over their citations.

But why?

On the surface, it may seem strange that research professionals need to work so hard to improve the SEO of their blogs and research papers. But the fact remains that the internet is a hypercompetitive environment where even the smallest advantage could pay off hugely, especially since all the other factors—access, search algorithm, and software—that control the internet are more or less the same for every user. All it takes is one specific combination of words in a title or a single good citation in a research paper to multiply its visibility. And given the volatility of the internet, this could even happen to a research paper that reports findings identical to other similar studies, just because it is better optimised. While this might sound unfair at the level of the individual paper, overall, SEO has also helped weed out fraudulent research articles, fake pieces written by spambots, and paid links. It has enhanced the experience of users and rewarded those research professionals who have managed to achieve a balance between research quality and search optimisation. Now, it’s up to scholars to take advantage of SEO and make it easier for the cream to rise to the top.