
Altmetrics Conference - Annotations Panel


Formal Metadata

Title: Altmetrics Conference - Annotations Panel
Number of Parts: 14
License: CC Attribution 3.0 Germany: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract: Annotation in Researcher and Publishing Workflows - Annotation in Publisher and Researcher Workflow - Annotation and Metrics on Cambridge Core ...
Transcript: English (auto-generated)
Okay, let's get started. Although I may be a spitting image of you, I am not. But I'll be chairing this. We start with Heather Staines, and actually, maybe a little bit of background first.
We, as organizers, or the organizing committee, were wondering what... So we're going to open up the agenda this year, so... Maybe I need coffee. So we were wondering: are annotations the new thing?
Should they also be in the donut, or wherever else? Or only the long things, the spider? And so that's why we organized this session, and we have a couple of people here who know more than average about annotations, I guess. They'll each give a short presentation, and then we'll have a group discussion
where we also invite you to participate. So the first one is Heather Staines from Hypothesis, together with... Is it going to be a dual show? No, it's sequential. Sequential, good.
So it's Heather from Hypothesis. Thanks so much, Martijn, and thanks to the organizing committee
for presenting our combined sessions. As Martijn said, we wanted to do just a little bit of presentations to kind of kick things off. So what we'll be talking about is annotation in both researcher and publishing workflows. A little bit about Hypothesis, if you've not heard of us.
We are a mission-driven nonprofit based on open source code. Across the bottom of the screen there, you can see some of our funders. We've been around since about 2011, and started to work with publishers in 2016, when we collaborated with eLife to create some publisher-specific features. We also have a side of our work in the education space, which is managed by my colleague Jeremy Dean.
And if anyone has questions about that, I'm happy to talk about it. When we talk about annotation, we think of it in terms of layers. So the annotation is on the version of record regardless of the format, and there could be multiple conversations happening at the same time.
So depending upon what you've come to the content to do, you could be part of a private collaboration group, look at the public annotations, or be part of a publisher workflow. Last week we crossed 3.7 million annotations. My slide is old, it's from last year, so I need to get a new slide.
The growth roughly follows the academic year, with the peaks and troughs matching it, because there are a lot of students annotating. Of this total, about 20% are public annotations, 20% are completely private, and the rest are in collaboration groups. We're at about 25,000 collaboration groups of varying sizes across the web.
So I think when we talk about annotations, we tend to think about the public ones, but one of the things that I hope we can get into in the question time is a little bit about metrics and how private and private-group collaboration can provide insight.
So just a little bit about how we're working with publishers. I mentioned that eLife was the first publisher we worked with to build specific features. eLife has an open group across their content. This means anyone can participate, so it's world-readable and world-writable.
eLife does use their own account system, so if you want to play with it, you can make an eLife profile and participate there. They published a blog post earlier this year on the different ways that they found, even in the early days, that folks were using annotation. I can share that with folks who are interested.
One of the interesting ways that eLife has been using their annotation layer is to add information on corrections and updates. It's not a substitute for issuing an official correction, but it's a way to draw more attention to it. In this case, on this slide, there was a citation that was missing, so they added that in there.
We also started to work this spring with the American Diabetes Association, and they had a different use case. Every January, they publish the Standards of Medical Care in Diabetes. They wanted a mechanism to be able to update that without having to wait until January 2019. This is a world-readable annotation layer,
but the only folks who can add to this are folks from ADA. They've done a really great job with it. They've really treated the annotations as first-class research objects. Each one of them has the date that the annotation was published, the date it was approved by the publication committee,
and a citation, so that you can take advantage of that as a researcher. All of the publishers who utilize Hypothesis accounts in the back end get a group page like this that you can see across the entirety of the publisher or the journal or the book, depending on how they set it up.
You can see all of the annotations that have been made on the content and explore them through that mechanism. Annotation is also interesting in peer review, both traditional peer review and even open peer review. This is a screenshot from an eJournalPress integration with their manuscript submission system. They've created some specific tags that the reviewers can select from.
They've created the ability to do single-blind or double-blind peer review, and it all integrates with their peer review dashboard. That's now available, as I understand it, to anyone who uses eJournalPress. I just wanted to draw a little bit of attention to how annotation can be used
as part of publisher workflow. Last year in the fall, Springer Publishing decided to use a private annotation group when they were adding over 8,000 chapters of books to their new Springer Connect platform that Highwire had built for them.
They made a group that included their production department and their copy editors, and they did QA on top of their XML staging site for all of the things that needed to be corrected in HTML. In two months, they made more than 10,000 annotations, so we're working with them to see how the tool can be better optimized for them for the next round of content.
Another use case comes from the American Society for Microbiology. They were migrating about 15 journals from one part of Highwire to another, and they knew that they had more than 200 landing pages that were going to have to change as a result, with links and text. So they created an editorial group, and they actually marked up the pages on their website using Hypothesis,
and then they were able to keep track of the changes that they had made and the changes that they still needed to make, so I thought that was a pretty cool use case. And finally, just one slide about analytics, because I know we're going to get into this much more in the question period. If an annotation is made that's publicly visible, the publisher can get, through the API or as a download from us,
who made the annotation, when they made it, on what document, what they selected, and what they wrote, so all that information is available. For private and private group annotations, of course you can't see who made them, but you can see which documents had activity and when,
so that's an indicator of engagement, and we provide full statistics back to the publisher about the growth of annotations over time, most annotated documents and the like, so lots of information there.
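A minimal sketch, not from the talk, of what pulling that public annotation data through the Hypothesis search API can look like; the article URL below is a placeholder, and without an API token only the public layer is returned.

    import requests

    # Hypothesis search API; public annotations need no authentication.
    API = "https://api.hypothes.is/api/search"
    article_url = "https://example-publisher.org/doi/10.1234/example"  # placeholder

    resp = requests.get(API, params={"uri": article_url, "limit": 200})
    resp.raise_for_status()
    data = resp.json()
    print("Total annotations on this document:", data["total"])

    for row in data["rows"]:
        # Each row carries who annotated, when, what they selected, and what they wrote.
        selectors = (row.get("target") or [{}])[0].get("selector", [])
        quote = next((s.get("exact") for s in selectors
                      if s.get("type") == "TextQuoteSelector"), None)
        print(row["user"], row["created"], repr(quote), "->", row.get("text", ""))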
Okay, thank you. We'll continue with Aravind, who's from Europe PMC. Thanks a lot for inviting me to the panel. Today I will be talking about what Europe PMC does towards annotations
within the framework of the research workflow, and how publishers could take advantage of the work that we are trying to do. A little bit of background about Europe PMC,
so we are a digital archive for biomedical and life science research publications. Apart from providing content for searching for articles, we also try to reflect the complexity of the research articles,
because I think we all can agree that a research paper is not just a description of scientific facts, there is a lot of data behind papers, for instance author affiliations, funding information, the actual data cited in the paper,
so we try to reflect this complexity by providing services via APIs to extract this information. So, Europe PMC has a wide user base, but what we are currently working towards in particular
is literature-data integration, with curation as the use case. This also plays well because of the environment we are in. EBI, as you might know, hosts a lot of databases,
and there is a lot of curation work going on. Curation is a high-value task where curators read papers, make annotations of the scientific assertions, and when you look at the tremendous growth in the number of publications,
it's a cumbersome task. In fact, with my personal association with a lot of curators, I have really tremendous respect for them, because it's really painstaking work. So what we are trying to do is to aid and help them in identifying facts
described in the papers. So for us, within this framework, annotations mean entities like genes and proteins tagged via text mining and data mining algorithms.
So what we have done is set up an open annotation platform. I'll go through the workflow in detail. We have opened up the annotation platform, where we ourselves provide entities to the platform,
and also we accept annotations from various text mining groups who want to share their annotations. So we integrate them into the platform and make it available via APIs. Through the API, databases can fetch this data
to link the data that's mentioned in the literature to the relevant databases, and other applications can also consume this data to compute on. And we also consume this data ourselves via an application called SciLite. I'll talk about it a little later,
where we overlay the annotations on the articles on Europe PMC. So currently we have eight providers, Europe PMC included, and we cover a wide range of different annotation types, and currently we have close to 500 million annotations,
and this covers genes, proteins, diseases, organisms, database accession numbers, gene functional statements, gene disease associations, molecular interactions, to name a few.
So all this data is available via our API, which supports various formats, and users can choose the one that suits them. It's a modularized API, and users can search the data based on articles or entity mentions,
or run provider-specific queries based on relationships, and also filter by section. For instance, if someone wants to fetch data on chemicals that are mentioned only in methods sections, they can do that. So this is an example of a result from the API.
So it gives you details about the article, the entity that's being tagged, and the link to the corresponding database, and the section information, of course.
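A rough sketch of fetching such annotations for a single article through the Europe PMC Annotations API; the endpoint and parameter names are as I recall them from the public documentation and the article ID is just an illustrative example, so check the current API reference before relying on this.

    import requests

    # Europe PMC Annotations API: annotations grouped by article.
    API = "https://www.ebi.ac.uk/europepmc/annotations_api/annotationsByArticleIds"
    params = {
        "articleIds": "MED:29161267",   # illustrative PubMed ID
        "type": "Gene_Proteins",        # one of the annotation types mentioned above
        "format": "JSON",
    }
    resp = requests.get(API, params=params)
    resp.raise_for_status()

    for article in resp.json():
        for ann in article.get("annotations", []):
            # Each annotation carries the tagged text plus a link to the relevant database entry.
            for tag in ann.get("tags", []):
                print(ann.get("exact"), "->", tag.get("name"), tag.get("uri"))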
So as I said, we consume the data ourselves for visualization purposes. For each article, we fetch the corresponding annotations and overlay them on the article itself. And it has an interactive user interface.
Users can click on the highlighted terms and go to the corresponding database. So far, we have set up this infrastructure, with the key aspects being scalability and sustainability.
So going forward, what we want to do is to use this data, for instance, to help with article triage. And for this, we are currently conducting user research to understand how curators curate data. It's more of an observation study
as to what decisions they make to select a particular paper that they want to curate. Is it based on just the title? Or is it the experimental methods? So we try to understand what procedures they follow
and use this annotation data for triage. We are also working on granular linking for the annotations. What often happens is that in databases there are article references, but they point to the article
and not, say, to the sentence from which the curator decided that this particular publication is worthy of curation. So we want to point to the sentence from which the annotations are being extracted. And the other aspect that we are working on is
layering human annotations over automated annotations. Say, for example, this would help create a curation platform where curators could comment on a particular annotation. There is a snippet saying that a particular gene is associated with the disease
and the curator wants to give some comments on it. You know, that would also be very interesting, and it would add another layer in terms of the credibility of that particular assertion, which could also be used for computing.
So this is what I wanted to share with you. Thank you. Thank you. It seems there are already two types of annotations. A simple start.
It is not Alex. You are welcome to give my talk if you want. No, I'll do it. I can improvise. That would probably be better than what I'm going to say. Okay, thanks. So I'm going to be talking about annotations and metrics on Cambridge Core,
which is the Cambridge University Press platform. In this little presentation, I'm just really going to set the scene about what we're doing in both of these areas ready for the panel discussion. A little bit about my background. My job title is Senior Digital Development Publisher, which is one of those fluffy job titles that tells you absolutely nothing about what I do.
Basically, I head up our digital publishing team, which looks after our digital strategies across our academic books and journals. And we're the interface really between the platform team, because we have our own platform, as you said, and the rest of the business. And we also manage relationships with our third-party tools
and services partners, like Hypothesis and Altmetric and various others, which is one of the reasons that I'm here today. We do a whole lot of other things as well, but that's not relevant here, so I won't bore you with that. So by way of a bit of background, we've chosen to implement annotation on Cambridge Core partly as a result of incremental development
of our platform, which includes both our book and our journal contents, and then partly as part of our open research strategy. So it's really early days for us for annotation, but on this slide you can see some of the use cases that we are seeing or we're anticipating for annotation,
and they fall into these three categories. So, discussion between various groups, whether that's between authors and readers, or lecturers and students, for example; then research uses, for example open post-publication peer review; and then the one I'm going to focus on today, authors adding extra dimensions, extra detail, to their published research, their published content.
So that could be things like related research, lay summaries, information on practical applications, and so forth. So our first venture into annotation at Cambridge University Press has been Annotation for Transparent Inquiry, which we're calling ATI for short, because otherwise it's a bit of a mouthful. And this is actually a finalist for the ALPSP Awards
for Innovation in Publishing this year. So for those of you who were at the ALPSP conference a couple of weeks ago, you might want to zone out for the next two slides, because you've heard some of this before. Essentially, ATI is a collaboration between ourselves, Cambridge University Press, and Hypothesis, and also the Qualitative Data Repository.
It provides a framework for enriching qualitative articles by allowing readers to follow authors' analytic processes and access their data. So really the premise is that it's relatively easy to make the data and associated methodology available for quantitative research, but to date there's been a lot less focus on this area
in qualitative research, so really this initiative is intended as a step towards addressing that imbalance. Pardon me. So as you can see on the slide, the way we go about Annotation for Transparent Inquiry is that we use the Hypothesis annotation tool as a collapsible panel alongside our content. And we work with the authors
to create these structured annotations. So each annotation on the content contains an analytic note, which is where the author explains why a particular data source was chosen or why a particular method was used. They provide a source excerpt, which certainly allows us to go into a lot more detail
than perhaps a quote in the text, or often no quote at all in the text; a translation of the source excerpt if needed, because by no means are all of those in English; a link to the full data source in the repository, where copyright permits; and also a citation as well.
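To summarise the structure just described, here is a minimal sketch of the fields a single ATI annotation carries; the field names and URL are illustrative, not CUP's or QDR's actual schema.

    # Illustrative field names only; not the actual ATI/QDR schema.
    ati_annotation = {
        "analytic_note": "Why this source was chosen and why this method was used.",
        "source_excerpt": "Quoted passage from the underlying source material.",
        "source_excerpt_translation": None,  # only needed when the source is not in English
        "data_source_link": "https://data.qdr.syr.edu/dataset/example",  # placeholder repository link
        "citation": "Full citation for the underlying data source.",
    }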
And these data sources are not necessarily the things we traditionally consider as data, certainly in the quantitative domain. So they give you things like audio clips, maps, archival documents, museum objects, all sorts of different types of material. So going back to the first question as to why annotation? Well, from an author's point of view, this is about making the work they've done visible
and showcasing all of the extra detail that they usually can't include in a traditional research article because of word length constraints. And it also helps them meet funder mandates or general requirements for making their research transparent as well. From the reader's point of view, it enables them to verify the evidence behind the author's claims,
and perhaps to replicate the research, but also reuse some of that raw material for new research questions as well. And then we've talked a few times in the last several days about how annotation and commenting might actually itself further drive engagement with the content. And I've been really interested to hear some of those comments from some of the speakers so far, because that really resonates with the experience we've had at Cambridge so far.
And so we launched ATI on some articles in April, and we've already started to see, very quickly after that, the usage of some of those articles double, though not yet the actual citations. And we've also been told that lecturers are using these annotated articles to teach students the methodology of qualitative research.
And it's possible that the existence of these annotations might actually bring a citation advantage. There's some evidence in the quantitative domain that making data and materials open leads to more engagement and possibly more citations, and it's possible that you could see the same in qualitative research as well,
although it's far too early to say anything for the articles that have these annotations at CUP. So that's Annotation for Transparent Inquiry. We're going beyond ATI for all the reasons I outlined on that first slide, and we want to promote annotation as part of engagement with our content more widely. So having partnered with Hypothesis for ATI,
the technological next step is then for us to extend the annotation tool across more of our content, for a consistent user experience alongside ATI. So we'll shortly be doing just that on our platform. And really, the reason this links into metrics, I guess, is that as part of that, we plan to add metrics
about the number of annotations on our content pages, in our article-level metrics and our book-level metrics. We plan to do that in a couple of different ways. So as Heather mentioned, Hypothesis has different layers. So we'll have a layer for author annotations, and also then this public layer in which anyone with access to the content
can comment or engage. And so we intend to include in our metrics the information about the number of author annotations but also the information about the number of public annotations. Ideally, as well, we want to include information about the number of private annotations, and that's on Heather's side. The intention is that we can do all of that with the Hypothesis API.
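One way this might look, sketched under the assumption that author annotations live in a dedicated Hypothesis group: query the search API and read its total field. The group ID, token, and article URL below are placeholders.

    import requests

    API = "https://api.hypothes.is/api/search"

    def annotation_count(article_url, group=None, token=None):
        """Number of annotations the Hypothesis search API reports for one article."""
        params = {"uri": article_url, "limit": 1}
        if group:
            params["group"] = group  # e.g. a dedicated author-annotation group (placeholder ID)
        headers = {"Authorization": "Bearer " + token} if token else {}
        resp = requests.get(API, params=params, headers=headers)
        resp.raise_for_status()
        return resp.json()["total"]  # total matches, not just the rows returned

    article = "https://www.cambridge.org/core/article/example"  # placeholder URL
    public_count = annotation_count(article)
    author_count = annotation_count(article, group="authorgrp", token="YOUR_API_TOKEN")  # placeholders
    print(public_count, author_count)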
So really, I just want to finish by mentioning that this is part of a wider review of metrics on our platform. So you can see here, there are some metrics in green, and those are the ones that we display right now. And then annotations,
I think when I made these slides, they were purple, but apparently now they're blue. That's something that we're about to add to our metrics in the very near future. And then beyond that, we're thinking about what other metrics we should be displaying. Now, you'll see quite a lot of these are not necessarily altmetrics, but more traditional metrics, particularly in the journal category.
And really, this is driven by what we're hearing from our customers, particularly our society partners, as we publish a lot of society journals. And particularly in humanities and social sciences, they're saying that the impact factor, as we all know, has its issues, and they want to see other things that are more appropriate in their disciplines.
And then there are some specific requests we're getting for things like the five-year impact factor, SJR, and so forth. In line with the principles of DORA, not only do we want to provide more journal metrics, but also to enhance our article-level metrics. And Cambridge Core is very much about bringing our content together: articles, books, and also our new article-
and-book hybrid format called Elements. And so we don't want to treat books as second-class citizens. I personally come from a books background, so that's not an option for me. And so, I guess on the heels of, well, a few years later than Springer Nature really, we're thinking about what else we can do
to really enhance our book metrics. And I guess a few of the things that we're suggesting here are kind of aligned with what Bookmetrix is doing. So we've been playing copycat a little bit, but we think that it's important to show a number of different types of engagement with our content for the purposes of our authors and for our readers,
for our journal publishing partners and so forth. At the moment, the reason these are in amber, they're under consideration, like I say, and we're conducting some research with our authors and our readers to find out which of these or other metrics they would want to see, which of them they've heard of, which of them they understand. And we'll go from there to determine the next steps
in terms of how we enhance our metrics. I'll finish by saying that when we do enhance our metrics pages, the intention is to also beef up our explanation of metrics, which we host centrally on our platform, and we will link through from all of these to that explanation, so that people can understand not just that there's this number, but where does it come from,
what's the data source, how has it been calculated, how do we use it, what does it mean to me. So I guess that's some context for the discussion, and I'll stop there. And now, the last speaker is Alex from PaperHive.
Hello everyone. I'm Alex from PaperHive, one of the co-founders. Today I will speak about three things. A bit about PaperHive and what we do. Then, second thing, what are the biggest challenges to any sort of annotation approach
for publishers mainly. And the third thing, I will try to give a bit of input for this discussion afterwards, focusing on the potential benefits and of course disadvantages of annotations as a metric,
as a potential metric. So PaperHive, challenges for the annotation approaches and pluses and minuses of annotations as a potential metric. So some context. As you all know, collaboration in academia has basically permeated all of the phases of the research
workflow throughout the last decades. Reading has been one of the last ones to get a bit more collaboration in it, with different sorts of approaches, for example collaborative literature management or discussions. And this is one of the spaces
where PaperHive, a scholarly collaboration network, has been really active for the last few years. So PaperHive is a scholarly collaboration network connected to publisher websites, and it enables researchers to manage, discover, discuss, and annotate their literature,
whether it's a publication or a book, anything possible. How does it work? You usually have a small widget on the publisher website. When you click it, the PaperHive website pops up as a reader, and in this reader you can do all those wonderful things. If you don't want the widget, you can skip it.
The relevant part from a publisher's point of view is that the content is not put on PaperHive. It's not a separate place where you need to store it, where you need to track copyright compliance. No, you don't need to do this. PaperHive connects to the server of the publisher, and everything is pulled in dynamically. For example,
the content is pulled in if the user is authorized to see it, say because they're at a subscribing institution. If they're not, they probably won't be able to see the content. There are three pillars of PaperHive, three main areas of functionality that we have.
Communication, documents, and people. Communication: we allow public and private discussions on the version of record. This means you can interact with the entire world within the margin of a publication, or you can do it just with your colleagues at the university.
This is all reflected in the stats of the publisher. Second thing, documents: we are a platform that allows you to search across different publishers and find different sorts of publications for your research. We also allow, for example,
literature management, so that you can just have your papers to discuss and read with your colleagues. Then, people. Everyone who signs up for PaperHive, for example with their ORCID account or their Gmail account,
has a profile; that's the way we connect communities, and we also power teams. Does anybody use Slack or another business messenger? I guess quite a few. We do support PaperHive channels. We took the name from Slack. It's a great name
for this sort of communication on a specific topic, so we do support PaperHive channels where you can have your colleagues, where you can have your documents and all of your discussions. There are several interesting use cases
that we power. Some of them are related, for example, to the general use cases that Agnieszka, Heather, and Aravind talked about. There are some that are specific to PaperHive. For example, we do author exchanges. It's more of a general thing. I'm showing an example of a book
that is being discussed between the author and the students. Then there's another example: you can use PaperHive for interactive teaching and lectures. This is actually a Springer Nature textbook, and it has seen some great usage
both in terms of downloads and discussions, basically helping the professor to highlight important elements, to ask questions, or to receive questions from the students. One use case that I don't think we have heard today, and that is pretty active on our site, is called Community Proofreading.
Community Proofreading is something that a linguistics publisher from Berlin does. They invite a bunch of people, 20 to 30 people, to one book and everyone gets a chapter and proofreads it. We've seen some crazy usage on books. I think our current record
is something like 2,200 comments on one title, which is massive. At some point in the past, we had to rewrite part of our backend in order to support this use case. This said, there are many challenges to annotation approaches. You won't see such massive usage
on all the titles. For most of them, we see some sort of long-tail statistic as far as publication usage is concerned. So what are the important things you have to have in mind before jumping into
annotation? When does it work? First of all, you would be in the best position to go forward with this if you already have some active community that would like to interact, or you have a clear goal in mind. For example, in this case
with the students, there's a professor wanting to discuss a certain book. Or in the example of a publisher, they simply wanted to do a community proofreading and then open peer review. Number two, some time commitment, and actually some large time commitment
and coordination effort is expected from the editorial and marketing teams of a publisher. You cannot simply integrate the PaperHive widget, get your content inside, and just expect a lot to happen. The same, I guess, is true for other providers. And this is of course
a blessing and a curse at the same time. A lot of commitment is necessary, and if you're not in the position right now to do so, you'd better wait for some more time before you get there. And then another big one: not all annotations are created equal.
Heather actually gave a great description of the different types. And this is a fact: not all annotations are incredibly smart, high-quality science breakthroughs. Many of them are typo discoveries, or okay-change-this, or some
not that relevant questions. And I am certain anybody who would like to go forward with annotations in order to power up their journals does not necessarily want to just have some typos discovered. So this is one thing to consider, and all these major challenges
and some of the others combined should be an important part of your decision. Then my last point: if you decide to go forward with this, and if you can say yes, we are ready to do so, then you certainly can have some major benefits.
These include, okay, the first one is more of a feature specific to us: you can easily integrate with our system. Of course, no change to your publisher platform is required to simply get your content viewed through it. Potentially, if you move forward, you can
see increases in your usage and the impact of the research. You do have control over your usage data, simply because the content is still hosted where it was before, on your servers. You are offering a service to your authors, which is great. And then you can also stimulate copyright-
compliant sharing. I would like to finish my presentation with a very quick look at some of the pros and cons of annotations as a usage metric. I hope that we are going to go deeper into this in the discussion afterwards.
But these are a few that were actually collected in a consultation by Martijn's team a few months ago. First of all, annotations are evidence that a user is actually engaged with the content. A download, by comparison or in contrast, is not necessarily
showing this. A download is a download, but it doesn't mean that I have read something. An annotation means that I am interested in this; there's a much higher probability that I am engaging with the content. Second benefit, second plus: annotations are a very granular, in-document
metric. Think about page 5, the second paragraph. If I am spending time there, or if I am adding some thought there, then it means that this paragraph has some relevance. So this, although it is further down the line in the future of scholarly communication, can be potentially very very powerful. Think
of some sort of heat map. And then number three, an important one I would say: you actually get very strong insights into the reader community background. Which segments are using a particular piece of content: is it mostly students, or researchers of a specific
community? Are they, surprisingly, interdisciplinary researchers who are engaging and asking other types of questions? There are two important cons to these advantages, I would say. The first is that the effort to collect and sanitize the data is very big, unfortunately.
Fortunately, there is already work being done in the direction of standardizing annotation formats. We, as well as the other organizations at this table, are using the standard W3C format for the structure of the annotations, meaning that in the end this data can be much more easily collected, standardized, and potentially used as an altmetric.
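For reference, a minimal annotation in the W3C Web Annotation data model looks roughly like this, written here as a Python dict; the creator, dates, body, and target values are illustrative.

    # Minimal W3C Web Annotation (https://www.w3.org/TR/annotation-model/); values are illustrative.
    w3c_annotation = {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "type": "Annotation",
        "creator": "https://example.org/users/reader42",
        "created": "2018-09-26T10:00:00Z",
        "body": {"type": "TextualBody", "value": "Interesting result; compare with Table 2."},
        "target": {
            "source": "https://doi.org/10.1234/example",
            "selector": {"type": "TextQuoteSelector", "exact": "the annotated passage"},
        },
    }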
I already mentioned number two. And last, the variable quality of annotations: discovered typos in an article are definitely not as valuable as one comment with a great insight into the research. So with this, I would say thanks a lot for the attention, and everything is an annotation to us. If you have any questions, this is one of the e-mails
that you can use. Two things are actually relevant right now for us. One is the Beyond Download industry consultation group that we started with several publishers, including CUP and others, which actually goes one step further: to allow users
to upload their content to private groups and discuss it with others, without, however, denying the publisher the chance to see any sort of usage around this content. So this industry consultation group is still open to new members;
if you're interested, I'm happy to talk to you about this later. Right, thank you.