
Altmetrics and open science


Formal Metadata

Title: Altmetrics and open science
Number of Parts: 8
License: CC Attribution 3.0 Germany
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Abstract
- William Gunn, Mendeley – Understanding reader intent: Who are the readers of papers on Mendeley and why do they add what they add to their libraries? With an open catalog of millions of papers and applications used by millions of researchers to organize, share, and discover research, Mendeley is one of the most comprehensive sources of altmetrics available. However, there's a lot that still isn't known about the underlying population from which the metrics are derived, and the various behaviors its members engage in as they build their Mendeley libraries. We are collecting the largest sample of Mendeley reader behavior and will make the survey data available to the altmetrics community. In this talk, we will describe the data collection and present some of the highlights of the data so far.
- David Walters, Brunel University – Institutional services and altmetrics as drivers for a cultural transition to open scholarship: The causal effect of open dissemination on the impact of scholarly outputs may be mirrored in the statistical analysis provided by altmetrics. In the wake of technological developments and funder expectations, we at Brunel University London have a longstanding commitment to open access to our research outputs, going back over ten years. A single-campus, community-focused institution, our services and systems have been tailored to support our scholars and effect cultural change during the transition toward open scholarship. I will talk about how the evolution of our systems deployment has led to a support network that facilitates university publishing for new, open forms of scholarly output and that enables the monitoring of traditional published outputs through green, gold, or paywall distribution models. Our publishing systems include an Institutional Repository (IR) and a Figshare data repository. Our Current Research Information System (CRIS) provides the mechanism to monitor publication trends across our entire portfolio.
With the monitoring of open academic activity fully supported by the CRIS along with partner services Cottage Labs, DOAJ, Sherpa and CORE, I will outline how we developed a small centralised service around these tools, tailored to foster engagement and to transform dissemination practice across our community. Alongside the proliferation of social tools for researchers has been the growth of alternative metrics. From our services at ground level, we are well positioned to comment on the divide that exists between those researchers who actively use their social media networks to promote the discovery of their output and those who don't. I will discuss our 'Altmetric for Institutions' setup, which monitors the records held in our CRIS. I will demonstrate how this information is shared with authors, research managers and marketing to benefit different areas of the institution, but in particular how this provides a powerful visual prompt to users of our service who may be unsure about the academic return of the open movements in real terms. We are beginning to see how the data we work with every day could be used to extend the discovery of our academics' work and to promote the institution's reputation in this space. We curate a huge range of high-quality metadata within our CRIS: keywords, subjects and themes, to name but a few. We want to see better ways of using this data to select and promote our publications across the social sphere – ideally making use of and developing our existing local networks and the networks of our researchers – and I will speak to our progress in this area so far. I will conclude by arguing that the clear, mutualistic relationship between the altmetrics and open science movements necessitates effective co-operation with local university services to bring about a smooth and swift transition for authors to the open scholarly model.
- Kim Holmberg, University of Turku, Finland – Measuring the societal impact of open science: Altmetrics, which refers to both the research field and the altmetric events being investigated, has the potential to provide a more nuanced understanding of where and how research has had an impact. For instance, while citations only reflect scientific impact, tweets, blog citations, Facebook likes, and so on may be able to reflect other types of impact, such as societal impact on various audiences. Altmetrics is also closely connected to the Open Science movement, as both have emerged from the transition of scholarly communication to the web. In addition, while the open science movement still lacks incentives for individual researchers to adopt some of the ideologies of the movement, which in turn hinders its rapid assimilation, altmetrics provides novel indicators for attention, visibility, and impact that could stimulate the interest of researchers. As researchers see how their work is gaining attention online, it might motivate them to share more of their work openly. Our study will present preliminary results from a new research project (financed by the Ministry of Education in Finland) that works at the intersection of altmetrics and open science. The project investigates the connection between altmetrics and open science, maps the current state of research in Finland using altmetric research methods and data, and develops new indicators to measure societal impact and new methods to study these phenomena. Our preliminary results on the connection between altmetrics and open access publications show a significant difference in mention-based altmetric indicators between open access publications and others. Open access articles receive more attention on Facebook and (especially) on Twitter compared to articles that are behind paywalls.
The opposite appears to be the case for Mendeley readers and (to some extent) for Wikipedia posts, as open access articles are not used as much in these contexts as those that are behind paywalls. The preliminary results are based on an analysis of how almost 4 million altmetric events are spread between articles in open access journals (as listed by DOAJ) and other journals. The results support the assumption that Twitter especially might be able to reflect the attention of a wider public, while Mendeley is used mainly by researchers. In addition, the preliminary results also suggest that a great many Wikipedia articles are in fact written by researchers with paid access to research articles, providing additional evidence of the high quality of Wikipedia articles.
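A minimal sketch of the classification step described above – splitting altmetric events by whether the article's journal appears in a DOAJ-derived list, keyed here on ISSN for simplicity. The ISSNs and events below are invented examples, not the study's data.

```python
# A DOAJ journal export can be reduced to a set of ISSNs; the entries
# here are purely illustrative.
DOAJ_ISSNS = {"1932-6203"}

def split_events(events):
    """events: iterable of (journal_issn, event_type) pairs."""
    open_access, paywalled = [], []
    for issn, event_type in events:
        # Events on DOAJ-listed journals count as open access, the rest as paywalled
        (open_access if issn in DOAJ_ISSNS else paywalled).append(event_type)
    return open_access, paywalled

oa, closed = split_events([("1932-6203", "tweet"), ("0000-0000", "mendeley_reader")])
print(oa, closed)  # ['tweet'] ['mendeley_reader']
```

A real analysis would also need to map each article's DOI to its journal before this lookup, a step omitted here.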
Transcript: English (auto-generated)
Thank you so much for coming to the session on Altmetrics and Open Science. We are starting a little bit late. We are not going to finish late. So what I'm going to propose to do is to keep the speakers to ten minutes each
and have a collective question-and-answer session after that so we don't run over and no one gets less than the ten minutes they've been allocated. So my name is Mike Taylor from Elsevier and I'm very pleased to introduce to you William Gunn. William is going to talk about Mendeley readers and who they are and why they are doing it.
And William has ten minutes. So we've been talking a bit about all of these different kinds of activity data.
I'm going to present and describe to you a little bit about... Oh, we can hear it? I probably should move it a little bit closer. Is this a little bit better?
No? Huh. I need... Okay. Can you hear me now? All right.
So we've been talking about various different kinds of activity data. What I'm going to describe to you today is some of the nuts and bolts of Mendeley data and what you can do with it, what you shouldn't do with it, and what are some of the questions that are still remaining out there about it.
So just as a brief refresher here, the way that you get Mendeley readers is a researcher will install the Mendeley application, they will add papers to their library, and when they do that, we are capturing information about who they are, their profile as a researcher
as much as they've described it to us, and what documents they're adding. And each time you add a document to your library, the reader count of that document goes up by one. We take all of this activity from the various applications by which someone interacts with Mendeley via a desktop, laptop, mobile device, Windows, Mac, iOS, Android, Linux, and we
compile all this stuff into what we call a canonical representation of a document. So when a user adds a PDF, we pull out the metadata, we cluster that document with all the other copies of that document that have been uploaded by other users in all of their various forms with, you know, different cover sheets and annotations and whatnot, and compile statistics about who's uploading those documents, connect it with the author if we can, various different identifiers for where it's found, et cetera, et cetera, and you can search the catalog and pull out this kind of stuff. It's actually kind of nice because we can use this readership information to feed into the standard, you know, matching algorithm, and we like to think deliver a little more relevant results for some things. So in short, a readership is an addition of a document to Mendeley.
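As a toy sketch of the counting rule just described – each user's library addition bumps the canonical document's reader count once – assuming the clustering step is reduced to keying on a DOI. All names and data are illustrative, not Mendeley's actual implementation.

```python
from collections import defaultdict

def count_readers(library_events):
    """library_events: iterable of (user_id, doi) pairs, one per 'add to library'.

    The real system clusters uploaded copies by matching metadata; here
    we simply key on DOI and count each user at most once per document.
    """
    readers = defaultdict(set)
    for user_id, doi in library_events:
        readers[doi].add(user_id)
    return {doi: len(users) for doi, users in readers.items()}

events = [("u1", "10.1/abc"), ("u2", "10.1/abc"),
          ("u1", "10.1/abc"),  # duplicate add by the same user: still one reader
          ("u3", "10.2/xyz")]
print(count_readers(events))  # {'10.1/abc': 2, '10.2/xyz': 1}
```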
It comes with some metadata. It kind of feels like a citation in some ways, but you have to understand that the underlying population is different. There are people using Mendeley for all kinds of things, and so it's not going to behave exactly like a citation. In fact, if you look at the correlation of a selection of papers from PubMed with
their citations in Scopus, what you find is, you know, there's really not any correlation at all. If you think about it, you shouldn't really expect to find much of a correlation. Looking at other altmetrics, you see that things look a little bit more like there's
a relationship between the two. This is downloads of papers in PLOS versus readers on Mendeley. And when you break it out by discipline and you limit it to year, you can actually get some things where it looks like there's reasonable correlations in the data. So readers look a little bit more like downloads.
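Comparisons like readers versus downloads are usually made with rank correlations; here is a minimal stdlib-only Spearman sketch. Tie handling is omitted for brevity, and the numbers are made up rather than taken from PLOS or Mendeley.

```python
def rank(xs):
    """1-based ranks; ties are not averaged, which a real analysis should do."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for position, i in enumerate(order):
        ranks[i] = position + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rho via the classic 1 - 6*sum(d^2) / (n*(n^2-1)) formula."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

downloads = [120, 45, 300, 80, 150]   # invented per-paper download counts
readers   = [30, 10, 90, 40, 15]      # invented Mendeley reader counts
print(spearman(downloads, readers))   # 0.6
```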
But fundamentally, and this is some great work done by Jason Priem, showing the darker the square is, the stronger the correlation is, for Mendeley saves. So these guys right here, really, you know, they're kind of like downloads or page views,
but really the only thing that they're very much like is themselves and other social bookmarking activity. So we thought, we need to understand not this paper-centric view, where you're looking at a paper and you're aggregating all this information about the paper, but we need to understand the underlying population. So let's take a population of people, you know, and understand what they're doing with
their documents. That is going to get us past this kind of, this sort of, you know, detour down the rabbit hole that we've gone down trying to correlate readership versus citations and resources and all of these other metrics, which hasn't really yielded all that much. It's more interesting to pick a population of people and look at the activities that these people share. So we have a population of people on Mendeley that we have curated. They are mostly early career researchers, as you can see here, mostly male. They are mostly Ph.D. students, doctoral students, and there's a smattering of professors
that identify variously as assistant or associate or lecturer or staff researcher or, you know, blah, blah, blah, and they're heavily weighted towards biological sciences, engineering, medicine, but there's quite a range of different disciplines available there. So we're going to provide, you know, a characterization of the demographics of these folks, and then we're going to select some of these folks and ask them follow-up questions such as, you know, well, we can query our database and find some information like this.
We're also going to ask them questions about intent, like when you added this paper, this particular paper to your library on this date, what was, why were you doing this? What action were you carrying out? You know, what were you intending to do? So we're going to collect all of this data, and we're going to put it together, and
we're going to make this data available as a data set for altmetrics researchers, such as yourself, to use, to derive insights from, to study, and ideally to do better quality research and hopefully to kind of get us past some of the log jams that we
run into when we focus on the paper and all of the metrics surrounding it and not focus on the underlying behavior of the users. So we think this cohort will be a valuable addition for research in that respect.
That was awesomely less than 10 minutes. This is really exciting stuff. So, as one of the organizers of 2:AM, I get to do this rather cool thing of when I see someone speaking at another conference, and I think they're really interesting, I get to invite them to come to another conference, and I saw David speaking on some
of the work he's been doing in supporting communicating open science in his institution in Brunel, and he's going to be giving a talk on a similar subject today for us. David, thanks very much. You have 10 minutes. I have post-it notes.
Okay. Good morning, everyone. I'm David Walters, and I work for Brunel University London.
My presentation today is titled Institutional Services and Altmetrics as Drivers for a Cultural Transition to Open Scholarship. I'm going to talk about the services being developed by institutions to foster a cultural change in our local communities. I'm going to talk about new services that we believe are required in order to swiftly bring about this transition.
In the first section, I'll begin by discussing institutional services as drivers for open scholarship. I'll talk about the development of services at Brunel geared toward effecting cultural change in our local community. Next, I'll talk about the evolution of our systems deployment that's facilitating
this transition, and finally, I'll discuss how our services are being built around effective data management techniques in order to empower our role as advocates for change. In the second part, I'll talk about how social media and altmetrics are rising to prominence in the kind of services we're delivering, and I'll talk about
our aspirations for future services within this context. There's been an explosion of institutional services at UK universities in support of researchers and open access, and a somewhat popular phrase for us has been the idea of a post-Finch world, referring to a report advocating a gold-led approach to open access
in the UK. Some key milestones for us have been the implementation of the RCUK and COAF open access policies, and also the HEFCE policy, which mandates green open access for research assessment. And what this has done is prompted the implementation of support services at UK universities.
For example, here at Brunel, we have a dedicated centralised support team, so we're now in a position to serve our academics across a number of different research support areas. Effectively, we're kind of like a frontline service, really helping our authors deal
with a transition to new models of publishing and the scholarly model, but as you can see from the list, only one of those is open access, and that's what I'll focus on here today. So for open access, some key milestones for us have been the early adoption of an institutional repository some years ago, setting up a fund for gold open access for all our authors
since 2008, the implementation of a CRIS system back in 2010, and the ongoing administration of the RCUK block grant, which first came to us in 2013. So fundamentally, our role has been to lead our community as advocates through a culture
change, and Open Access Week is a great example of this advocacy role in action. But this kind of one-to-one advocacy work should happen far more regularly than it does. Our services exist to foster engagement with our communities, and they work to engender open dissemination practice as the established norm.
We're also facilitators of open access, serving those necessary aspects of administration, making reports and recording payments, for example. As facilitators, our tasks are broadly split between reporting requirements to stakeholders and request-driven processes, and the challenge for us has been to minimise the impact of these processes, especially when staff resources are low. So in delivering this transition, both roles are necessary, but facilitation has, up to a point, been the dominant activity in our service delivery.
Being overburdened with administrative tasks has suffocated our role as leaders and advocates, and only advocacy is going to bring about the true cultural change we all wish to see. What's still required are the systems to rebalance these roles for support staff, and more importantly, to remove the burden from academics so that time that could be spent on research is not unfairly taken away.
The implementation of a CRIS system has been vital for the development of our research support services, and at Brunel we use Symplectic Elements. CRIS stands for Current Research Information System. It's essentially a database used by institutions to monitor academic output alongside
local business systems, and you can see the diagram on the right, if you've got very good eyesight, gives a simplistic overview of the architecture employed by the system on a local level, and as you can see, all the feeds drawn by the system touch all areas of the organization, but primarily this database monitors research outputs
and associated research activity. Other key functionality is the way it brings data in from outside sources: ORCID, bibliographic databases, and altmetrics, for example. It connects with our IR and our Figshare data repository, acting as a registry
for all research activity at the institution. And it's already had a number of business applications. It helped us to do our REF submission back in 2014, and it also powers a number of our web pages as well. However, this system was not designed with our developing research support services in mind.
And notably for us, some key community-driven data resources seem to be missing. For example, CORE, the repository aggregator, and Open Article Gauge, the automated licence checker. So in the next stage of our evolution, this is where our service development stepped in.
We came to realize there was a need to build additional processes on top of a CRIS architecture to support our service requirements specific to the needs of our community. So we implemented a CRIS-led data management approach to support the facilitation aspect of our services.
We wanted to manage our data more effectively by making best use of these existing data sources. We did this because we wanted to curtail administration time and give scalability to this aspect of our service requirements. We wanted to simplify the administration process and achieve a greater level of understanding of our publications data over different dimensions,
particularly, of course, in the area of open access dissemination. So our first approach was to try and apply some of these missing data sources to our institutional record. And this gave our data the required depth to monitor dissemination trends
over the university's portfolio. And this in turn informed our own service response in real time, enabling evidence-based conversations with specific departments and a means for targeting our own limited human resources more effectively. So we also created a small database to structure funding
and information requests into a coherent shared workflow. And again, this is a demonstration of new services being built around a CRIS framework in order to better serve the needs of our academics. The point is service building, which is something the library is very experienced in and has come to do very well.
We're also proactively identifying papers that are under an open access mandate from the acknowledgement statements found in the Scopus and Web of Science databases. And this collation of techniques has enabled a greater understanding of the needs of our authors thanks to our ability to analyse their publishing habits and our ability to identify trends
in this area for the institution. And this is an example of some of our results. So you can see a breakdown of open access dissemination trends across the university's portfolio. And thanks to the architecture employed by the CRIS, we can direct this approach towards specific departments
or research groups within our community. It's also allowed us to develop mechanisms for effectively managing complex financial processes. And we're also helping to improve the array of new data services that are emerging to support the transition to open access.
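The acknowledgement-mining step mentioned a moment ago – spotting funder mandates in Scopus and Web of Science acknowledgement text – can be illustrated with simple pattern matching. The funder names and patterns below are examples, not Brunel's actual rules.

```python
import re

# Hypothetical funder patterns; a production list would be longer and
# curated against real acknowledgement phrasing.
FUNDER_PATTERNS = {
    "RCUK": re.compile(r"\bResearch Councils UK\b|\bRCUK\b", re.IGNORECASE),
    "Wellcome": re.compile(r"\bWellcome Trust\b", re.IGNORECASE),
}

def funders_in(acknowledgement):
    """Return the sorted names of funders mentioned in an acknowledgement string."""
    return sorted(name for name, pattern in FUNDER_PATTERNS.items()
                  if pattern.search(acknowledgement))

print(funders_in("This work was supported by the Wellcome Trust."))  # ['Wellcome']
```

Papers flagged this way can then be cross-checked against the CRIS record to see whether an open access copy has actually been deposited.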
So for example, we're now working with the Open Article Gauge chaps, the automated licence checking tool, to improve the performance of the tool against the publication platforms chosen by Brunel academics. Moving into the next section, I'll just touch on the clear mutualistic relationship
that exists between alternative metrics and the transition to open scholarship. It's true to say that a link to a publication is more likely to be shared if it resolves to a resource and not a paywall. And open access brings scholarly research in line with expected usability habits of web content.
And at the risk of appearing to ingratiate myself to the organisers, this is some research taken from the Altmetrics blog. As you can see here, essentially open access equals more uniques on Twitter when a paper is shared. So in effect, this is something that academics really do need to know more about
and somewhere where institutional services may yet play a role in supporting this. Dealing with researchers on a daily basis, we're well positioned to comment on the divide that exists between those who are engaged with social media
as part of the dissemination process and with those who aren't. So there are a number of great tools already out there that assist with this process. However, success is largely dependent on researchers actively building their own digital networks. Obviously, you can't exploit a network to disseminate your research if one doesn't exist.
Melissa Terras is an academic from UCL and she's done a great job in documenting her experiences in this area. She's shown how academics can reap huge rewards from this process but also illustrated the effort required in building an online persona. The university and library service in particular
are becoming quite experienced at engaging communities through social media. And you can see from the slide, examples include our research blog and Vine channel that we're working to develop and we're not alone in some of these initiatives. But what about university services moving toward developing social channels around research themes?
Records metadata in the CRIS is curated to a very high standard. It's also possible to build research themes around this data as was done for the REF. So why can't staff use this to form part of an engagement platform? However these services develop, it's clear that we can play a role here, promoting research alongside our institutional affiliation
and possibly helping our research to achieve a greater level of impact. To close, institutional services play a vital role in affecting the transition toward open scholarship and open science. And related to Altmetrics, institutional services can play a much better supporting role for researchers in the social dissemination
of their outputs. And that's the end, thank you very much. Thank you very much. I'm really happy so far, I've not had to use my STFU post-it note.
The next speaker is Kim Holmberg, who's going to be talking about measuring the societal impact of open science. You've got about 11, 12 minutes, so we've got a little bit of flexibility here because William was so kind as to finish short. Okay, thank you very much, Kim.
Okay. So the title of this talk, measuring the societal impact of open science, I understand fully that that's a very broad topic and I'm not going to give all the answers
on how to do it or why to do it and so on. But I am going to briefly talk about a research project of ours that has just started, the background to the questions we are dealing with in that project, and our view on the connection between open science and altmetrics. And finally, just quickly, some very preliminary results from one of the ideas that we are studying in this larger project. This project is titled Measuring the Societal Impact of Open Science. It's financed by the Ministry of Education and Culture through their Open Science and Research Initiative. The ministry is making a big push for open science, they have a big initiative on that
and we have a tiny piece of that cake for this research project. Now, as I said, we are at the very beginning of the project and you know how it's always exciting to start a new research project, you have all these ideas. So we have a big list, we have 20 ideas on our board
on papers that, well, I'm going to say we are going to publish next year or so. And the preliminary results that I'm going to talk about come from one of those ideas.
Now, open science. We were just talking this morning, while we were waiting for the bus, about how open science is much more than just open access, and how it may be even more important to share notes from the research process than the final product. Taking screenshots of whatever analysis from the tools you are using and sharing those might be more beneficial, more valuable for students and PhD students
than the final paper that is published. And also to write in a style that anyone could understand. Now, altmetrics and open science both have their roots in the development of the web and changes in scholarly communication. Although citations are still important in scholarly communication and in the academic reward system, technically they are perhaps not necessary anymore. And perhaps altmetrics could bring some additional methods, could complement the academic reward system in some way. As we heard yesterday,
it's important to remember that altmetrics are not equal to citations in any way, but could be a complementary source. But there were some talks yesterday that perhaps criticized all the research where we have tried to compare or find correlations
between altmetrics and citation counts. On the other hand, I think it would be very exciting to find one altmetric, whatever that might be, that had a high correlation with citation counts, because in that case it could potentially be useful for predicting later scientific impact. But anyway, setting aside the question of what altmetrics really mean and how they can be used
or whether they can be used for research assessment, there is also a side effect, in a way, of using altmetrics. And that is the incentivizing effect when researchers see what kind of attention the research that they are sharing openly receives online. That can incentivize researchers to increasingly adopt the open science ideology and share their research outputs. But our challenge is, and will probably remain for a while,
that we do not exactly know who is creating those altmetric events or for what reason they are creating those altmetric events, why they are interacting with research outputs.
Now, we know, or can assume, that on different platforms some of the altmetric events are created by researchers, on others they are perhaps mainly created by the wider audience, the public, and on some others they are created by a mix of them. We also have to take into account the different roles that we have. I could be a researcher, but I could use an online source for personal things and not act as a researcher when I'm sharing something.
But this could, of course, also be a richness, because we all know that citations are created by researchers only, and that is why, using citation-based research assessment, we can only say something about the scientific impact of that research. We can't use citations to assess the wider societal impact of research.
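The correlation test mentioned a moment ago, looking for an altmetric that tracks later citation counts, can be sketched in a few lines. This is purely a hypothetical illustration: the reader and citation counts below are invented, and Spearman's rank correlation is used because both readership and citation distributions are typically heavily skewed.

```python
# Hypothetical sketch: does a single altmetric (here, Mendeley reader
# counts) correlate with citations accumulated later? All numbers are
# invented illustration values, not data from the project.

def ranks(values):
    """Rank values from 1..n (no tie handling; the toy data has no ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank_pos, i in enumerate(order, start=1):
        r[i] = float(rank_pos)
    return r

def spearman(xs, ys):
    """Spearman's rho via the rank-difference formula (valid without ties)."""
    n = len(xs)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(ranks(xs), ranks(ys)))
    return 1 - 6 * d2 / (n * (n * n - 1))

# (Mendeley readers one year after publication, citations after 3 years)
articles = [
    (12, 5), (40, 18), (3, 1), (75, 30), (22, 9),
    (58, 25), (7, 2), (90, 41), (15, 6), (33, 4),
]
readers = [r for r, _ in articles]
citations = [c for _, c in articles]

rho = spearman(readers, citations)
print(f"Spearman rho = {rho:.3f}")  # → Spearman rho = 0.927
```

A high rho on data like this would only show association, not that the altmetric predicts impact; a real test would need a time lag between collecting the altmetric and counting citations.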
Now, to the very preliminary results. We did a quick study on how altmetric events are distributed between articles in open access journals and articles in journals behind paywalls. These are based on an analysis of about four million altmetric events. We used the Directory of Open Access Journals (DOAJ) as our source of information on whether a journal is open access or not, so we trust that list for this. And here's what we found. We took the average number of altmetric events on the articles from each journal, and we see that articles from open access journals have a big advantage on Twitter and a slight advantage on Facebook, so more altmetric events. On these platforms, articles from open access journals attract more altmetric events than articles from journals behind paywalls, while the opposite was found, especially for Mendeley readers and to some extent also for Wikipedia. Mendeley and Twitter show the clearest indications of this. So what can we say about this?
Chances are that, because articles behind paywalls are used mainly by researchers, the Mendeley result is pretty logical, while the Twitter result might indicate that the attention comes from a wider audience. Interestingly, though we have to dig deeper into the results, this could also be additional evidence of the quality of Wikipedia articles: that they are increasingly written by researchers who have access to articles behind paywalls.
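The comparison just described can be sketched as follows: classify each article's journal by whether its ISSN appears in a DOAJ-derived list, then average the altmetric events per article for each platform and access status. All event counts here are invented; the ISSNs are real journals chosen only for illustration (PLOS ONE and Scientific Reports are in DOAJ, Nature and Science are not), and the tiny lists stand in for the four million real events.

```python
# Minimal sketch of an OA-vs-paywall altmetrics comparison.
# Counts are invented illustration values, not the study's data.
from collections import defaultdict

# Hypothetical set of ISSNs exported from the DOAJ journal list.
doaj_issns = {"1932-6203", "2045-2322"}

# (journal ISSN, platform, altmetric events on one article)
article_events = [
    ("1932-6203", "twitter", 14),
    ("1932-6203", "mendeley", 9),
    ("0028-0836", "twitter", 4),
    ("0028-0836", "mendeley", 31),
    ("2045-2322", "twitter", 11),
    ("0036-8075", "mendeley", 27),
]

# Group per-article counts by (platform, access status)...
totals = defaultdict(list)
for issn, platform, count in article_events:
    status = "open access" if issn in doaj_issns else "paywalled"
    totals[(platform, status)].append(count)

# ...then take the mean events per article in each group.
averages = {k: sum(v) / len(v) for k, v in totals.items()}
for (platform, status), avg in sorted(averages.items()):
    print(f"{platform:9s} {status:12s} mean events/article = {avg:.1f}")
```

A real analysis would also need to normalise by discipline and publication year, as the discussion after the talk points out, since both altmetric volumes and access models vary strongly across fields.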
To end, shameless self-promotion, thank you very much.
Kim, thank you so much. So on that, we have seven minutes before we break up for lunch, so nicely, nicely timed, thank you all the speakers. I do have a question which I am burning to ask, but first of all, I ought to check with the audience, does anybody have any questions for William, or indeed, okay, we have a speaker, we have two questions here,
I may have to wait. Yeah, just a quick note about the results, in terms of the prevalence of altmetrics as a function of the source. There is a recent study that came out a few months ago, and it shows that there's an open access advantage in Wikipedia citations when controlling for impact factor and journal field, so it will be useful to compare the results. There's an arXiv pre-print that you can look up, and I'll be happy to discuss this later.
Yeah, so I also had a question for Kim, really around the timing of the events that you were using in this study. Would it not be an equally parsimonious explanation that Wikipedia and Mendeley events often refer to older articles, whereas Twitter and Facebook are more likely to refer to more recent articles, and that might explain the differentiation as well? I'm just interested to know how that was or wasn't controlled for.
Yeah, the data for this was from 2012 to 2014, so not way back, but. A partial answer. Right, are there any more questions for the three speakers there?
Okay, here we go, go on. It's more of maybe a fuller answer, or just to say that when you're looking at that, and you're grabbing the entirety of everything that you got from Altmetric, you need to really break it down by discipline, by publication, basically by every dimension, because of the size of the samples, in terms of how many biomedical things you have versus how many social science things. If you try to grab it as an entire set, it's hard to draw any conclusions, because there's so much variation along all of the different dimensions that you could split that analysis by. So I know that that's very preliminary, but I wouldn't draw too many conclusions around whether there's an open access effect there or not, and I don't think we can say that unless you start really breaking it down. So I think I'm gonna have, yeah, I've got enough time to ask the question before I have to tell myself to STFU.
David, I had a question. There was some research in the Netherlands a year or two ago that suggested that researchers are spending almost a third of their time doing paperwork for grants, and I was wondering if you had any ideas on how the REF has been influencing that,
whether people are spending more time on paperwork, whether your system is helping them to do the job quicker and spend more time in the lab. And now I can't see you. Where are you? Yeah.
Yeah, thanks, Moi. I think that's absolutely a really big problem that's facing researchers at the moment, is all these different mandates and requirements, hoops that they're expected to jump through, sticks versus carrots, are really fundamentally taking time away from the business of doing world-class research, and that's really why there is a need for services
like those we've established at Brunel and other institutions as well, to really try and take that burden away from academics. In terms of the REF, I mean, that for us, that's over and done with, but there's still a multitude of things that researchers are having to do
in order to apply for grants and to meet the expectations of funding bodies, and that's really where we're trying to focus our efforts in trying to take away some of those pressures, because they've got quite a lot on their plates. Yes. Yeah.
Not at this stage, no, but I think it's because they're all quite new. All the services at Brunel are quite new, the services at all UK universities are quite new, so it's quite nice in a way, because we're all working together to try and find a common purpose or a common practice. Lots of things are being promised in the future, but it's not clear exactly which. There's definitely work that can be done now, though, in terms of trying to get an idea of what we want, and really that is just to serve researchers as best as we can.
Thanks ever so much. I think we have one minute left for another question before we have to go for lunch. I see you. I just want to note that you talk about cultural transition, but you are advocating an institutional transition,
and so I'm a bit amazed about, this is more general, how deeply influenced the British system is nowadays by the REF as a framework, and people are really almost giving up their academic freedom before being hired in the REF.
Deviance is really no longer appreciated. David, do you want to come back on that? I'll run around the front, shall I? Culture, yeah. Yes, so obviously the last REF was last year, but we've had a lot of movement since then from HEFCE, and we're all pretty much thinking about the next REF. It is a very big part of how universities in the UK are thinking,
and a big reason why so many of them adopted these kinds of systems and tried to put these processes in place is that they can see the implications for income further down the road, even though we don't quite know exactly what shape or form the REF is going to take, or even if it's going to exist in that form, as an article last week sort of demonstrated.
But yes, it's definitely the wrong kind of culture, I think, and it's taking time away from academics and the business of doing research. They're worrying about complying with mandates as opposed to the business of actually doing research. So fundamentally, that's what we're trying to do, is to try and ease that burden as far as we can
so that it doesn't impact too much on the world-class research that they're doing. Thank you very much, everyone. Can we have a big round of applause for our three speakers? And now everyone else but me can go eat.