ESE Author Q&A: Lisa Colledge



In August, European Science Editing featured an article by Lisa Colledge and Chris James from the research metrics team at Elsevier. The paper, titled A “basket of metrics”—the best support for understanding journal merit, deals with one of the most interesting and pressing elements of scholarly publishing and academia – the use of statistics to assess the value of published research.

We spoke to Lisa to understand more about the importance of the paper published in ESE, her work at Elsevier and projects she is involved in that impact academic communities.

The Colledge & James paper appeared in European Science Editing 41(3) in August 2015, and can be openly downloaded from the EASE site here.
 

EASE:  Please introduce our readers to your ESE article.

Lisa Colledge: The paper is about using a “basket of metrics” to understand merit. Using several metrics gives more varied and nuanced insights into merit than is possible by using any one metric. The basket applies to every entity, not only journals, but also researchers, articles, and institutions, and we describe the various ways in which an entity can be excellent.

For a journal, of course, this is about the papers it publishes and how these are viewed and cited, but also about the characteristics of its editorial and author communities, and its impact on the general public. We shared survey results that tested opinions about usage metrics; these confirm that one size of metric does not fit all, and that there is real appetite to use a basket of metrics.


EASE: What is your main job role?

LC: Director of Research Metrics at Elsevier.

EASE: How long have you been involved in this area?


LC: I have been in this position since October 2014, but have been working with research metrics in various roles at Elsevier since 2006.

EASE: What are some of the innovative aspects you could tell us about your work?

LC: The most innovative aspect is making research metrics practical, so that they can be used for the benefit of research by everyone, beyond the very talented and specialized scientometricians.

EASE: What is the difference between scientometrics and bibliometrics? Or are these two terms interchangeable?

LC: I believe that these terms are often used interchangeably, but there are differences:

-  Bibliometrics refers to the metrics that you can derive from written information, typically that held in a library. It includes metrics about journals and other serials, counts of items published, counts of citations, and metrics that depend on affiliation information (like the amount of international collaboration).

-  Scientometrics refers to metrics that reflect the performance of science and technology. This encompasses bibliometrics, but goes further. Scientometrics includes, for example, analysis of funding sources, insights into the commercialization of research and its links with enterprise, metrics about online views or discussions on F1000 or Twitter, and the influence of research on national or international policy and medical guidelines.

When I talk about “metrics” I am using the word as shorthand for the broadest picture of research – scientometrics, but definitely including bibliometrics, which continue to be hugely important.


We have developed, through our community engagements, the “2 Golden Rules” for making research metrics usable: Golden Rule 1 is to always use quantitative, metric-based input alongside qualitative, opinion-based input, and Golden Rule 2 is to ensure that the quantitative, metrics-based part of your input always relies on at least 2 metrics, to prevent bias and the encouragement of undesirable behaviour. Championing this approach with the community, by embedding it throughout our tools, gets me out of bed in the morning.

EASE: When you talk of your Golden Rules, it might be helpful for our readers if you could give an example of two metrics you would use to substantiate a quantitative measure or assessment?

LC: A metric is a numerical value that represents some aspect of journal performance. There are all kinds of aspects of journal performance that you can represent as a number, for instance:

-  The number of submissions and the number of items published are 2 examples of metrics. You could use each of these to calculate a third metric – growth (a minimal sketch of this kind of calculation follows this list).


-  The number of citations or online views per item published are 2 further examples.


-  Number of mentions in mass media, number of shares in scholarly tools like CiteULike and Mendeley, and number of times a journal’s content is discussed in F1000, are 3 further examples.
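
To make the arithmetic behind these examples concrete, here is a minimal sketch in Python. All of the journal counts are invented for illustration and do not come from the paper; the point is simply that two counted metrics (submissions and items published) can be combined into a third (growth), and that citations can be expressed per item published.

    # Hypothetical yearly counts for a single journal (illustration only).
    submissions = {2013: 410, 2014: 460, 2015: 520}
    items_published = {2013: 120, 2014: 130, 2015: 150}
    citations = {2013: 300, 2014: 390, 2015: 480}

    def growth(series, year):
        """Year-on-year growth of any counted metric, as a percentage."""
        previous = series[year - 1]
        return 100.0 * (series[year] - previous) / previous

    # A simple derived metric: citations per item published, per year.
    citations_per_item = {year: citations[year] / items_published[year]
                          for year in items_published}

    print(f"Submission growth, 2014 to 2015: {growth(submissions, 2015):.1f}%")
    print(f"Growth in items published, 2014 to 2015: {growth(items_published, 2015):.1f}%")
    print(f"Citations per item published, 2015: {citations_per_item[2015]:.1f}")
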


There are more examples given in the paper, in Figure 1, which is probably an easier way of communicating the information. The point is that there is not only one way for a journal to be excellent, and you wouldn’t want a lot of journals that were all excellent in the same way. There are different ways of being good, and so you need different types of metrics (numbers) to reflect a more complete version of the picture – that’s Golden Rule number 2.

Golden Rule number 1 is that metrics can never give you the complete picture, no matter how many metrics you have; for that you need to combine them with opinion, expertise and judgement, such as peer review. (But neither can opinion, expertise and judgement give you the complete picture – you need to combine those with metrics.)


Figure 1. A “basket of metrics” for understanding journal performance. From “A ‘basket of metrics’—the best support for understanding journal merit”, by L. Colledge and C. James, 2015, European Science Editing, 41(3), 61. Copyright 2015 by the European Association of Science Editors.



EASE: When you talk of your Golden Rule 2, could you give an example of two metrics you would use to substantiate a quantitative measure or assessment?

LC: Yes! The example is about Field-Weighted Citation Impact (FWCI) and Citations Per Publication (CPP); a small worked sketch follows this list:
-  FWCI is a very popular metric. It takes into account the different volume and speed of citations received by articles in different fields, of different types (e.g. an article as compared to a review), and of different ages; these are variables that can hide real differences in performance if they’re not taken into account, so this is a common go-to metric. An FWCI of 1 means that a journal, institution, or other entity is cited exactly as you would expect; above 1 means above-average citations, below 1 below average. If your FWCI is 2.63, it means you’re cited at 263% of the expected rate.

-  That’s useful information, but like all metrics, FWCI has weaknesses. The normalization by field, type and age makes the method quite complex, and it is not easy for someone to validate the calculation themselves. Another weakness is that 2.63 doesn’t tell you anything about the number of citations you’re talking about – it could be 3 citations, or 33, or 333.


-  Simply pairing FWCI with CPP addresses these weaknesses of FWCI. CPP is a simpler metric that can be checked in the database it’s based on, and it tells you whether you are talking about 3 citations, or 33, or 333 per publication.


-  Equally, using CPP on its own wouldn’t compensate for the differences in field, type and age, and wouldn’t give you any indication of whether 33.7 citations per publication was “good” (above average) or not – but if you combine it with FWCI, you solve this easily.
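
To illustrate how these two metrics complement each other, here is a small worked sketch in Python. The citation counts and expected values are invented, and the field-weighted calculation is deliberately simplified: the real FWCI normalization by field, document type, and age is performed against a large citation database, not against the handful of numbers shown here.

    # Hypothetical article-level data: (citations received, expected citations
    # for an article of the same field, type, and age).
    papers = [
        (3, 2.0),
        (33, 15.0),
        (12, 10.0),
    ]

    # Citations Per Publication (CPP): simple, and easy to verify against the
    # underlying database.
    cpp = sum(cites for cites, _ in papers) / len(papers)

    # Simplified field-weighted ratio: the average of actual over expected
    # citations, so 1.0 means "cited exactly as expected" and 2.63 would mean
    # 263% of the expected rate.
    fwci_like = sum(cites / expected for cites, expected in papers) / len(papers)

    print(f"CPP: {cpp:.1f} citations per publication")
    print(f"Field-weighted ratio: {fwci_like:.2f}")

Pairing the two outputs tells you both how many citations are involved and whether that level is above or below what you would expect.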


EASE: What do you feel are your most significant work-related achievements?

LC: When I talk about the “2 Golden Rules” with members of the research community, they are seen as common sense – practical and sensible – and I consider that a huge achievement.

I’ll also highlight the pride I feel in one input to, and one output of, the 2 Golden Rules:

Snowball Metrics is one of the inputs that has led to the development of the 2 Golden Rules, and I am privileged to have been involved in that project: Eight of the world’s most prestigious universities have reached consensus on how they want their performance to be measured so they can benchmark themselves against each other, apples to apples. The most-used metrics in SciVal, a flexible benchmarking tool, are Snowball Metrics, proving that they really resonate.

SciVal is an output of the 2 Golden Rules, and offers unparalleled flexibility to users to support their diverse questions in an intuitive way.

EASE: Do you have any interesting projects in the next year or so, that you are able to speak about?

LC: The “basket of metrics” is the logical outcome of Golden Rule 2. It describes, firstly, a wide range of research metrics to showcase many different ways of being excellent; and, secondly, it says that this range of metrics should be available for all of the entities that people want to measure, such as researchers, journals, institutions, and funders. The basket can provide useful intelligence for every question. We are currently focusing on extending the range of metrics that we offer to include novel metrics such as media mentions and Mendeley readership, and also on improving the presentation of the metrics available for serials such as journals and conference proceedings.

EASE: Are you a member of EASE?

LC: Elsevier has individual memberships of EASE, and we are exploring options for further engagement. We are looking forward to participating in future EASE conferences.

EASE: What motivated you to write for European Science Editing?

LC: Chris James, my co-author, and I were invited to extend a short article that we had prepared for Elsevier Connect. The paper was about attitudes to metrics based on usage data, created when a user makes a request to an online service to view scholarly information. We jumped at the chance to write this “basket of metrics” article for ESE because we were able to put more context around the short article, and to reach the very important audience of journal editors who are so influential in building opinion in research.

EASE: In what way is the topic of your paper important to you?

LC: Until the end of 2014, our tools were largely based on the well-known publication, citation and collaboration metrics. The addition of usage metrics to our offerings early in 2015 felt like the first practical test of the concepts of the 2 Golden Rules, and the basket of metrics. It was extremely exciting for me and Chris to be able to talk about usage metrics, and see the feedback on the questions we asked during the webinar coming in from attendees all over the world. The feedback was extremely positive, and validated and extended the concepts I’ve mentioned.

EASE: What impact do you hope this paper could have, and what changes could it make?

LC: I hope this paper helps to drive 3 changes:


-  Acceptance that metrics do have a place in research alongside peer review, definitely not instead of it.


-  Belief that using research metrics is common sense. They can help to answer questions, and build confidence in an answer.


-  Recognition that there is no such thing as the “best metric”. That idea is nonsense, and it’s a waste of time to think and talk about it – every metric has weaknesses, and no metric is suitable for every question. It’s much more useful to think about the best metrics, plural, that can help to answer specific questions.

EASE: If people want to read more about this subject, can you name one or two specific articles they should read?

LC: You can find out more about how usage metrics can be helpful to you in the article “5 ways usage metrics can help you see the bigger picture”.

Elsevier's position on the use of research metrics in research assessment is described in 12 guiding principles, and our response to the final report to which these principles contributed is available in “Elsevier’s approach to metrics”.

EASE: Are there any websites or other resources related to your paper they should seek out?

LC: The Snowball Metrics recipe book, available at www.snowballmetrics.com/metrics.



Lisa can be found on Twitter at @lisacolledge1 and @researchtrendy.

You can find Lisa’s article in the full August issue of the ESE Journal archive on the EASE website here.



Previous interviews in the ESE Author Q&A series can be found here.


Interview conducted by Duncan Nicholas of the EASE Council.






