Don’t sort by relevance
Many academics may be inadvertently conducting literature searches that prioritise the “greatest hits” of white, Western, male authors, despite being aware of the need to diversify their sources, a report suggests.
The study, conducted by University of Cambridge researchers for the Society for Research into Higher Education, recommends that academics generally disable the ‘sort by relevance’ feature on scholarly databases and calls for users to be made more aware of the associated risks.
A growing number of online platforms are being used to search for scholarly literature. There are longstanding concerns that these may be pre-selecting content and making some material more visible than other work, using opaque algorithms that determine ‘relevance’ on behalf of the user.
Experts have previously warned that these platforms risk prioritising highly cited papers, perpetuating existing biases in academic publishing that over-represent the work of scholars in high-income countries – predominantly white men – while overlooking other contributions.
The Cambridge study found that many academics are aware of potentially biased search returns in Google Scholar, which is by far the most popular literature search platform. To compensate, however, they use other specialised sites such as JSTOR, PubMed and Semantic Scholar – even though these often have similarly vague ‘sort by relevance’ features.
"If you look at the first few pages of many of the results these searches return, you’re basically getting the greatest hits of established researchers in that field."
The study was carried out by Dr Katy Jordan and Dr Sally Po Tsai, both from the Faculty of Education at the University of Cambridge.
“If you look at the first few pages of many of the results these searches return, you’re basically getting the greatest hits of established researchers in that field,” Jordan said. “Based on what we know about wider trends in citation practices, women, scholars of colour, or those from the Global South are less likely to feature. Early career researchers also face a disadvantage.”
The study surveyed 100 academics about the platforms they used, their perceived benefits and constraints, and their assumptions about how search results are ranked. Some also participated in in-depth video interviews, during which they shared their screen while undertaking various searches.
Jordan and Tsai also analysed 14 of the largest academic bibliographic databases. Of these, ‘sort by relevance’ was the default setting in all but two. Half provided no information at all about how ‘relevance’ was determined; the remainder gave only limited details.
“We were really surprised by how widespread the sort by relevance issue is,” Jordan said. “It’s the baseline offering of virtually every academic literature platform or social networking site, but the information about how they’re calculating ‘relevance’ is very patchy. There are risks with all the approaches being taken.”
Academics seemed aware of these risks when it came to Google Scholar. Most respondents were unsure how it determined ‘relevance’, instead attributing results to what the researchers describe as perceived “algorithmic magic”. As one participant put it: “I think this is a question we would all like answered. It’s a total black box.”
"Many of the academics we spoke to are on the right track by relying on multiple platforms rather than just one. The simplest thing they can do is switch off sort by relevance in each case."
This led most academics to use Google Scholar cautiously, but they also often turned to other search platforms with similar shortcomings. None of the 100 participants expressed concerns about “algorithmic magic” in relation to any of these other academic databases. Instead, they often saw them as compensating for the drawbacks Google Scholar presented.
Since these other platforms also use opaque algorithms and appear, in many cases, to sort by relevance based on criteria such as citations and reputation metrics, the study suggests that some academics may, inadvertently, be reproducing the very biases they are trying to avoid.
Previous research highlights how problematic such criteria can be. For instance, one study found that research articles with female first authors consistently receive fewer citations than those with male first authors. Other research has shown that 30% of articles in supposedly ‘international’, peer-reviewed journals are by US academics; most are from Europe and North America, and only 1% originate from Sub-Saharan Africa. Another study concluded that papers not published in English are “systematically relegated to positions that make them virtually invisible”.
The report urges academic search platform managers to be more transparent and provide clear, accessible definitions of how they determine ‘relevance’. It also suggests a “radical rethink” of ranking algorithms, given the well-known biases in citation practices.
The researchers recommend that universities and journals systematically endorse ‘positive citation practices’, which some academics and institutions have already adopted on a piecemeal basis. They add that universities could consider training academics on better use of search tools within staff development programmes.
“Many of the academics we spoke to are on the right track by relying on multiple platforms rather than just one,” Jordan added. “The simplest thing they can do is switch off sort by relevance and customise their settings in each case.”
The report, ‘Sort by relevance’: Exploring assumptions about algorithm-mediated academic literature searches, is available from the Society for Research into Higher Education website.