This article is a response to a piece written by Elsevier in response to an opinion piece I published in The Guardian, concerning the involvement of Elsevier in the European Commission’s Open Science Monitor. Comments from Elsevier are provided in italics, and my responses to them are in bold.
Firstly, thank you to Elsevier and Dr. Nick Fowler, Elsevier’s Chief Academic Officer and Managing Director of Research Networks, for taking the time to respond to my opinion piece in The Guardian. Dr. Fowler’s response raises accusations of misinformation on my part. This is not the case: all the arguments I made were correct, cogent and relevant, as I explain further below. The Elsevier response draws implications from perceived omissions in my original piece; given the strict word limit set by The Guardian, it would not have been possible to address all the points raised by Dr. Fowler, even if they were all relevant. A fuller critique would take far more space, and is perhaps best captured in our draft complaint on this issue to the European Ombudsman. Nevertheless, I am happy to fill in the gaps here with further arguments and evidence, which only strengthen my case and further demonstrate the validity of my arguments. I will not respond to those points in the response article which have little or nothing to do with the issue at hand (the Open Science Monitor).
The bid was, of course, part of an open procurement process as part of a competitive tender, a fact that Dr. Tennant fails to mention.
I am fully aware that Elsevier are subcontractors within a consortium led by the Centre for Science and Technology Studies, and also including The Lisbon Council for Economic Competitiveness and Social Renewal, and the ESADE Business & Law School: this information is clearly provided on the first page of the methodological note that was referenced within the article.
However, further details of the tender process remain scant and non-transparent. The tender award notification is available online, yes, but the next piece of public information was that the contract had been awarded to a consortium including Elsevier. The response thus omits numerous key questions about the nature of the process. Some general points regarding the lack of transparency are better addressed to the EC (and will be, in a separate communication to the Ombudsman), but they are worth mentioning here as general grounds for the more specific criticisms of Elsevier’s involvement as subcontractor:
- How did the 3 bids received for the tender score on the specific criteria that were used to select the contractor? Why is this information not required to be made public?
- Who evaluated the suitability of each candidate? Were any independent external experts involved in the evaluation process?
- Was there a consultation process involved?
- Why are tenderers only required to identify subcontractors whose share of the contract is above 15%?
- Was the identity of this subcontractor made known to the EU during the tender process?
- Was a risk analysis performed as to the ramifications of the choice of subcontractor?
And some specific points focussing on the subcontractor themselves:
- How do the consortium and the EU reconcile the motivations of Elsevier, which have been historically anti-open in many respects, with the intrinsic motivations behind open science, including its financing and governance? (Note: this point was also raised in my original article, and remains unaddressed.)
- Who is accountable for the process itself, including the resolution of internal disputes during the contract (not just the performance of the contract as a whole), given that the data provider (Elsevier), the European Commission, and the data analysts operate as three non-independent parties?
- Given the EU’s emphasis on Open Science, including Open Data, why is there (apparently) no requirement to insist that the Open Science Monitor must be based upon open data, open standards, and open tools (with appropriate licenses for accessibility) as a matter of principle?
- How will the comments on the indicators, many of which specifically also mention the bias towards Elsevier services (including my own comments), be handled as part of the consultation process?
These are critical questions to be asked (and answered) at a time when there are large, ongoing disputes in Europe between the big publishers and universities, for which Elsevier have been a particular focal point of discord; another key point made in my article that remains unaddressed. There is widespread unease within the European higher education sector that the scholarly publishing market, and the contracts within it, are not being administered properly (e.g., due to the anti-competitive nature of non-disclosure agreements), so questions such as these are highly relevant at present. It is less my responsibility to point out all of these issues, and more that of Elsevier, the consortium, and the EC to be fully transparent about this process.
More importantly, Dr. Tennant appears to be questioning the notion that a private sector company generally, and Elsevier specifically, can be a partner to science.
This is a misrepresentation. I am questioning whether Elsevier has too large a conflict of interest to be a suitable subcontracting partner within the Open Science Monitor. This question, to which my answer is obviously yes, is predicated on Elsevier’s oligopolistic market position and complex portfolio of interests, which include publishing and other scientific-workflow services (tied to specific business models and profit motives), as well as the provision of metrics services. As a subcontractor within the Open Science Monitor consortium, Elsevier are clearly in a position to contribute to defining the priority metrics, and those decisions may favour their own metrics and/or publishing products in future.
He appears to doubt that Elsevier can usefully and impartially support the European Commission as it seeks to gather relevant and timely indicators on the development of open science inside and outside of Europe to understand trends in the field better.
Regarding impartiality, this is fully correct: I doubt it, and Elsevier’s past actions provide the supporting evidence. Perhaps the clearest recent example is the demonstrated bias and conflict of interest (COI) associated with the CiteScore metric, which I mentioned in the original article (a detailed overview of all criticisms levied at Elsevier is clearly beyond the scope of both the original article and this response). I do not believe that an organisation with such a clear COI can be impartial in this case, irrespective of whether the methods for data collection, the data themselves, and the methodological protocols are made transparent. The consequence could be that better, alternative metrics are excluded from the Monitor, while Elsevier’s market position is strengthened, potentially creating dependencies on their research-workflow products.
The fact that the COI remains, irrespective of whether Elsevier are useful or not, is the key point that goes unaddressed here. Elsevier are now in a position where they will be monitoring and evaluating the very same science that they, and their competitors, sell as their primary products. Furthermore, the metrics and data sources used in the evaluation are biased towards those owned and operated by Elsevier, creating an inherent bias and COI, to the exclusion of their competitors and other primary data sources. This key aspect of my article was not addressed by Elsevier.
Elsevier embraces the principles of open science.
Not really: Elsevier embraces its own version of open science, and simply asserting otherwise does not make it so. Its track record in this regard is mixed at best, shaped by the need to protect its large profit margins; something well within its rights as a company, but disingenuous to present as an embrace of open science. Indeed, a recent independent report found that Elsevier scores quite low on openness. I would love to hear how Elsevier supports fairness, equality, rigour, transparency (in pricing, for one), open source, zero-length embargoes, open data, transparent research assessment, open licensing (CC BY or CC0), and open citations (or even just some of these).
On a more fundamental point, you have provided information that leads to the conclusion that around 94% of Elsevier’s annual article output is still paywalled content (see your comments, addressed further below). By preventing access to research, Elsevier actively inhibits the use of knowledge and tools that teachers, citizens, education unions, researchers, policymakers, and other potential users require in order to meet the everyday challenges of education systems and our wider societies. Elsevier’s business model of knowledge commodification undermines the basic principle that all people have an equal right of access to knowledge and education, irrespective of their background or status, and explicitly discriminates against the financially underprivileged. I would welcome a discussion of how this demonstrates Elsevier’s alignment with the principles of open science.
Advancing it is part of our purpose to serve science and health, and we have unequivocally committed to this publicly through actions such as signing on to the European Open Science Cloud.
While signing on to the EOSC is welcome, actions will speak louder than words. Signing up to something that has not even been launched yet is hardly unequivocal evidence of a commitment to serving science and health. Of note, Elsevier have not signed the San Francisco Declaration on Research Assessment (DORA), and have explicitly ignored other open initiatives such as the Initiative for Open Citations (I4OC). Cherry-picking examples that support Elsevier’s stance, while ignoring evidence to the contrary, is not a strong basis for argument.
Furthermore, Elsevier have a long history of anti-open lobbying, stifling the development of open science primarily in the UK and USA. Many of these cases were mentioned in my original article, and there are numerous others I could have selected. The response from Will Gunn on Twitter was that these cases are more than 11 years old, which is correct; however, they came at pivotal times in the development of OA, and attempting to dismiss them, and their ongoing impact, is not helpful. Also, Elsevier only withdrew their support for the notoriously anti-scientific Research Works Act (RWA) in September 2012, a relatively recent event. When trust and legitimacy are important for supporting EC decisions in matters like this, history matters. Perhaps Elsevier could also shed light on the potential role that the six RELX lobbyists for the EC might have played in the development of the Open Science Monitor.
That is why we receive 1.5 million new article submissions every year — a number that keeps growing: because researchers want their article to be one of the 400,000+ submissions we accept for publication each year.
No one doubts that Elsevier’s numbers are impressive in this respect; especially in that it rejects more than 1 million research articles every year. However, such a high submission volume owes more to the perverse focus on high-impact publications for career assessment and progression. Elsevier has helped create this environment, in which the focus is on the container rather than the content, because selling such publications is still its primary revenue source. The role of the Open Science Monitor is actually to move away from such indicators towards a fairer and more responsible rewarding system for researchers within an Open Science system, something also recommended by the EC Expert Group on Altmetrics. Therefore, touting this as a success factor justifying Elsevier’s role in the Monitor is counter-intuitive and contradictory to the Monitor’s primary purpose.
We are one of the leading open access publishers, and we make more articles openly available than any other publisher.
It does not matter how many times this is repeated; it is a selective reading of the data. In 2012–2015, Elsevier published almost 1.4 million research articles. In 2016, it published 25,000 Open Access articles (27,000 in 2017). If it now publishes 400,000 articles a year (as stated above), the vast majority of its content is still paywalled. If Elsevier want to call themselves a leading OA publisher, the same data indicate that they remain virtually the largest paywall-based publisher, publishing around 375,000 paywalled articles each year. Proportionally, this means that around 1 in 16, or roughly 6%, of Elsevier’s articles are actually OA; which many might say makes it one of the smallest OA publishers, at a time when so many publishers are at 100%. This hardly lends credence to the claim that Elsevier are embracing open science.
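For transparency, the proportion quoted above can be checked with a quick back-of-envelope calculation, using only the figures cited in this exchange (roughly 400,000 articles published per year in total, of which roughly 25,000 were OA in 2016):

```python
# Back-of-envelope check of the OA proportion, using figures cited above:
# ~400,000 articles published per year in total, ~25,000 of them OA (2016).
total_per_year = 400_000
oa_per_year = 25_000

oa_share = oa_per_year / total_per_year            # fraction that is OA: ~6%
paywalled_per_year = total_per_year - oa_per_year  # ~375,000 paywalled articles

print(f"OA share: {oa_share:.1%} (about 1 in {total_per_year // oa_per_year})")
print(f"Paywalled articles per year: {paywalled_per_year:,}")
```

Even taking the more generous 2017 figure of 27,000 OA articles, the share remains under 7%.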
Note that these articles often do not even fulfil the widely accepted requirements for OA, as authors are asked to transfer ‘nominal copyright’ to Elsevier, which is in essence almost the same as a traditional copyright transfer. Furthermore, the vast majority of these articles are published in hybrid journals. This is now becoming widely recognised as an unsustainable approach to OA, and it has had none of the effects originally intended for it (e.g., creating a transition to full OA, or a functioning market around APCs). Calling yourselves a leading OA publisher is factually incorrect, and again ignores the history of lobbying against progressive OA policies.
We make freely available open science products and services we have developed and acquired to enable scientists to collaborate, post their early findings, store their data and showcase their output.
Again, simply providing tools for science does not make them ‘open science’. Free is also not equivalent to open, as the open source movement clearly has demonstrated.
We make Mendeley — a reference manager and collaboration solution — freely available for the millions of researchers that use it while also adding free data search and storage services.
There is little guarantee from Elsevier that this will always be the case: will Mendeley always be freely available? The fact that Elsevier encrypted Mendeley’s databases in a recent update does not inspire much faith, and it is certainly not within the spirit or principles of openness. And seeing as Scopus features prominently within the Monitor, are there plans to make it freely accessible too? Otherwise this is again a clear violation of the principles of open science.
We also acquired and developed free-to-use Plum Analytics and NewsFlo to make research metrics more inclusive.
It remains unclear what is meant by ‘inclusive metrics’ here. For me, inclusivity is an intrinsic part of open science, and I fail to see how Elsevier align themselves with it, based on the numerous comments above. What is clear is that Elsevier are moving into the data-analytics business as part of a company-wide strategic reorganisation.
We have co-developed CiteScore and Snowball Metrics with the research community – all of which are open, transparent, and free indicators.
And which are also biased towards Elsevier. As I mentioned in the previous article: “An independent analysis showed that titles owned by Springer Nature, perhaps Elsevier’s biggest competitor, scored 40% lower and Elsevier titles 25% higher when using CiteScore rather than previous journal impact factors.” This does not support the claim that Elsevier products are unbiased, either towards Elsevier’s own titles or against those of its competitors (or impartial, as mentioned above). Transparent or not, and irrespective of the motives, this is unacceptable, and a clear indication of inherent bias within the data and metrics.
The largest pre-print server in Social Sciences — SSRN — is free for use, and we have extended it across many other subject areas to enable the free posting and circulation of preprints in those disciplines too. Elsevier’s bepress and Pure enable universities to automate the hosting of manuscripts and datasets in their institutional repositories for readers to access for free.
I did not mention any of these aspects in my article, but will comment on them here anyway, as it is important to challenge these assertions. Firstly, as history tells us, just because a product is free now does not mean it always will be, especially when controlled by a commercial entity. Secondly, many consider these services to be ‘Trojan horses’ enabling Elsevier to infiltrate and control parts of university systems; even Stevan Harnad, a prominent leader of the Open Access community, has echoed this, calling Pure self-interested and exploitative. An article on Elsevier’s own website states that the community remains divided over the acquisition of bepress too. Thirdly, Elsevier has pulled perfectly legal content from SSRN without even notifying the authors. Either way, none of this appears particularly relevant to the Monitor.
For that reason, the consortium was careful to exclude a bias towards Elsevier products in the monitor’s methodology – another point ignored by Dr. Tennant.
I am not quite sure what this means, as the bias is clearly apparent in the methodology, with Elsevier as the sole subcontractor. Section 2.1 is dominated by Scopus over all competing services, such as Web of Science or Dimensions; indeed, the replacement of Web of Science by Scopus is made explicit in the Annex accompanying the methodological note (pages 12 and 14). This exclusivity persists despite the well-known biases associated with using a single citation database for research evaluation. Section 2.3.4 is also based entirely on Elsevier products. This is a bias from which Elsevier stands to benefit financially, and it excludes Elsevier’s competitors, creating a fundamental COI.
In the comments on the methodology that the EC is collecting, this bias is also repeatedly pointed out by numerous other individuals (myself included). I would advise the consortium to treat this issue with the gravity it deserves, as it has serious ramifications for the future of Open Science in the EU.
By relying almost exclusively on Elsevier-based services, such as Mendeley, Scopus, and Plum Analytics, subcontracting solely to Elsevier creates an inherent bias in the data sources:
- The potential direction and size of these biases is largely unknown at the present.
- This is partly a function of the products (metrics) and the data themselves being proprietary, and it reflects an irresponsible approach to the use of metrics for evaluation. Such an approach runs counter to the recommendations of the EC Expert Group on Altmetrics/Next-generation metrics.
- The metrics proposed to be used for the monitor are not acquired by an independent body, but based on Elsevier products and services, creating an inherent bias in the data sources.
- The fact that Elsevier is a publisher offering services that monitor scholarly publishing also presents a serious conflict of interest, and does not respect current competition laws.
- This also actively discriminates against the competitors of Elsevier, creating unfair market conditions around Open Science evaluation and metrics.
Why would anyone seek to exclude commercial players like Elsevier from their vision of open science?
This is another straw man argument: no one has said this. However, I am fully aware of several key things:
- That Elsevier have an enormous history of fighting against the development of Open Science.
- That this fighting is still very much ongoing, particularly in Europe.
- That there is an inherent COI in having Elsevier monitoring Open Science.
However, to answer your question honestly, here are some potential reasons:
- The recent EUA report demonstrating the amount which universities are currently spending on publishing, much of which goes to Elsevier.
- The fact that university consortia, with the power afforded to them by collective bargaining, are now cancelling contracts with commercial publishers, including Elsevier. This is also happening at individual universities, including in North America.
- As mentioned above, they have a profound history of anti-open lobbying. This history is not easily forgotten or undone.
So, really, it is not ‘commercial players like Elsevier’, but just Elsevier (and only in this instance), based on their unique history of business practices. There is also the follow-on issue that subcontracting to a single entity, whether commercial or not, creates further problems around biased data sources.
Given the $500 billion spent annually on academic and government research globally, is it feasible for the public sector alone to deliver the data, tools and services required for open science? How open can his vision be if it is closed to the possibilities offered by the private sector?
Two points here. On the first sentence, I believe Dr. Fowler answers his own question: yes. But no one has said that the public sector alone should deliver this, and it is not a clear-cut dichotomy between the public and private sectors. Nothing of the sort appears in my article either, which answers the second question: it is another straw man argument. Much of the rest of the article after this point is not relevant to the original piece, so I will not comment on it here.
Nonetheless, the two key points of my original article remain virtually unchallenged:
- That there are substantial issues surrounding the transparency of the process for awarding the subcontract to Elsevier.
- The COI that arises from Elsevier monitoring things that are related to the primary products it sells, to the detriment of its competitors.
Now, I would love to believe that Elsevier has truly turned around and is now a devotee of Open Science. I know that there are many talented people, with immense knowledge and skill, in Elsevier’s employment. However, this post does not convince me of such a turn; instead, it is quite evasive on the key issues my original article attempted to convey. There is little evidence to support the assertion that my original statements were misleading or misinformed; indeed, my responses here demonstrate that, if anything, the inverse is true.
If Elsevier wish to discuss further these two critical points, raised in my original article and expanded upon here, that would be welcome.