Day 2: Highlight presentations
Formal Metadata
Title: Day 2: Highlight presentations
Number of parts: 8
License: CC Attribution 3.0 Germany: You may use, modify, and reproduce the work or its content in unchanged or modified form for any legal purpose, and distribute and make it publicly available, provided you credit the author/rights holder in the manner they specify.
Identifiers: 10.5446/46260 (DOI)
Transcript: English (automatically generated)
00:05
Good morning, friends. Thank you very much for joining us on the second day of the 4:AM Altmetrics Conference in Toronto. As the person who is opening the conference, I would like to remind you that I will also be closing it, and that in order to close the conference properly
00:24
I need your ideas: via Twitter, in person with a hand up, or on the app that I linked to from the blog. I'm looking forward to seeing what you have to say about where altmetrics is going next, not geographically but as a discipline, as a field.
00:42
But without any further ado, I would like to introduce you to Richard Hulser of the Natural History Museum of Los Angeles. Richard is going to be talking to us about his work on the analysis of online discussions of scholarly content at a natural history museum, so it's a very different area,
01:00
completely unique to this event. So over to Richard; there will be time for questions afterwards. Okay, some help getting this going from all these techies, but we all have problems doing this.
01:32
Okay, great. Good morning, everyone. How are you? How many librarians do we have in the audience, anyway?
01:42
Wow, okay, good. As Mike was saying, I work at the Natural History Museum in Los Angeles. What I'd like to do is talk to you about three things. One is the context in which my whole talk sits, because I think it's important to understand the background of where this all comes from.
02:06
Then I'd like to talk about two different areas: one, the museum as a research institution, which a lot of people don't necessarily think of a museum as; and then the museum as a publisher. We are a scholarly publisher, and I'll cover some of the benefits and challenges of that.
02:24
Starting out with the context: the Natural History Museum in Los Angeles, as opposed to the one in the UK, is actually a family of museums. We have our large Exposition Park site, where we have the dinosaurs and all the other good stuff and
02:43
beautiful gardens, and that's where I have a large library of over 250,000 items, plus departmental libraries. We also have the La Brea Tar Pits; you may be familiar with the Tar Pits, and I have a small library and archive there. And we also have the William S. Hart Museum. William S. Hart (is this on automatic? I hope not)
03:06
was a Western movie star from way before John Wayne, and he willed his estate to the county, which we were a part of at one point in time as far as management goes. So anyway, all right.
03:24
So there are a lot of different departments in the museum, some of which are in the sciences, and those get the biggest press; in yesterday's talks and during the workshop a lot of folks talked mostly about the sciences. But we also have history, anthropology, and archaeology, and
03:43
archaeology in particular spans both the sciences and the social sciences. And we have mineral sciences, among others. A lot of these departments publish in small society publications that are really critical to their particular areas.
04:01
Many don't have DOIs; many don't have a lot of things. (I think this machine has a brain of its own. Something's telling me to keep going.) Okay, so the other focus for all libraries, museums, and archives is to demonstrate that we are useful in the 21st century, that we're not just a warehouse of materials.
04:25
As many of you have often heard: libraries, do we really need them anymore? Isn't everything on Google? No. And our museum is focused on the digital museum, and it's always interesting to hear the executives talk about the digital museum. In concept, they get it; in
04:44
practicalities, maybe. So it's up to us to help them understand that. Two of the key focuses are social media presence and a strategy to capture our visitors before, during, and after a visit, much like in the commercial world, where I did work: I worked at IBM for many years,
05:02
and I worked at Amgen, a biotech company, among other global companies. So we have a hashtag, #HowDoYouMuseum, and that's a way to engage the visitors and see what's going on. So there is a social media consciousness to them, and that's where we can help, or
05:21
take advantage of that. The most recent thing is this: as of last Sunday, we now have an official state dinosaur, Augustynolophus, otherwise known as Auggie, and it's the State of California's state dinosaur. I didn't know we were supposed to have a state dinosaur either,
05:42
but we have one. The key here is that they're doing a big press push on all this. The other aspect of it is that our senior vice president is a dinosaur guy, so he was heavily involved with this, and we do have the dinosaur bones. So there's all this research, and I'm thinking: okay, research,
06:04
social media, metrics; I have work to do when I get back. So this is a great opportunity to really blend it all together, bring consciousness of what altmetrics information can do, and relate it to something that's really on the minds of people like the vice president and the board of trustees. So we'll see how that goes.
06:28
So, the museum as a research institution. There are a lot of different kinds of museums around the world. Not all museums are research institutions; many are educational institutions and other kinds, but many of us are, and
06:46
we happen to be a natural history museum. The Getty, which is an art museum, also does a lot of research, and they're very interested in the topic we're all discussing this week, among others, because they too are interested in promoting and understanding how their research is perceived and talked about.
07:04
So, key research challenges; for a lot of this you'll be like, yes, I know this already, but just to emphasize: number one, the research department's visibility and value to the institution is always a big challenge. When people, yourselves included, come visiting, you see the exhibits; you might see the education people who are helping you understand those exhibits,
07:23
but you don't see all of us behind the scenes doing all the work to make that happen. So making sure that that part of what we do is in the consciousness of the people with the funding and the management decisions,
07:40
that's a really important, key thing. Then, in support of that, there is the perception of the institution's value to everybody around the world, raising the consciousness of who we are and what we do. Everyone knows the Smithsonian, and everyone knows a lot of other museums around the world. We hope they know ours, but this is something important as well.
08:00
What's great about altmetrics and other tools like that is that they provide measurements to back up statements and reasoning with some real facts, and, you know, researchers like that. So, why altmetrics at our museum? On the left is a typical agenda from our monthly meetings; I just took some samples. There are two things we do. One is we look at the
08:26
newest publications of the month: who published last month, and where. Some have DOIs and many don't, because of the specialties. What was interesting to me (I've been at the museum for a little over seven years) is that during the meetings
08:42
they also talk about who was interviewed by National Public Radio, who was in an online discussion, who was talked about in an interview on television, those kinds of things. It was in the back of my mind that that's interesting, but I had no idea how to capture it.
09:01
But around 2014 I was looking at: well, how do we handle just the bibliography side of it, not from the researcher's side but from the institution's side? How do we gather it efficiently and effectively, and not have, as even just seven years ago, the assistant typing those bibliographies on a typewriter and worrying about the syntax?
09:24
So, just looking at that, that's why RefWorks and Zotero and those kinds of products were on my list. I was looking at those, and that's when I came upon Plum Analytics, Altmetric, and Mendeley, and was trying to understand what these things were doing in those days.
09:43
Then later on I found out a lot of our scientists were on ResearchGate, so I threw that in there just to put it into the consciousness. Ultimately, we decided on working with the Altmetric company, and I'll show you some examples of what worked for us. What I
10:03
tell people all the time, and this is my marketing hat from my old days, is that you really need to look at what you need and what fits your needs. In our case, Altmetric worked for us. What it does is identify attention to research articles
10:22
even without press releases from our museum, and our communications office was very excited about that when I showed it to them. So, whatever tool you use, the fact that it's surfacing some of that (and I'll give you an example in just a few minutes) also enables the communications office to more easily identify
10:41
future potential promoters of the research and activities. It also gives a picture of attention to research from a lot of different people whom you can actually identify, which used to be very hard to do: research journalists (a lot of people talked about this yesterday), news outlets, bloggers, and other social media users.
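To make that promoter-identification idea concrete, here is a minimal Python sketch of how repeat promoters might be surfaced from a mention export. The records and field names below are hypothetical, not the layout of any actual vendor export.

```python
from collections import defaultdict

# Hypothetical mention records; a real altmetrics export would differ by vendor.
mentions = [
    {"author": "@science_desk", "followers": 120_000, "article": "sea-star-wasting"},
    {"author": "@science_desk", "followers": 120_000, "article": "parasitic-fly"},
    {"author": "@dino_fan",     "followers": 900,     "article": "parasitic-fly"},
]

def repeat_promoters(mentions, min_articles=2):
    """Find accounts that have mentioned more than one of our articles."""
    articles_by_author = defaultdict(set)
    followers = {}
    for m in mentions:
        articles_by_author[m["author"]].add(m["article"])
        followers[m["author"]] = m["followers"]
    hits = [(a, len(arts), followers[a])
            for a, arts in articles_by_author.items() if len(arts) >= min_articles]
    # Widest audience first, so outreach knows whom to contact.
    return sorted(hits, key=lambda t: t[2], reverse=True)

for author, n_articles, reach in repeat_promoters(mentions):
    print(f"{author}: {n_articles} articles, roughly {reach:,} followers")
```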
11:00
It also provides an easy way to gather attention data on researchers' work for grants. I asked one of our new postdocs, do you know about altmetrics? He said, oh yes, I use it; it makes it easy for me to gather what I need really quickly so I can stick it in a grant proposal, along with who's paying attention to my research. I said, very good,
11:22
keep telling the big boss that. Now, the concerns we have: the use of any metrics, as we all talked about yesterday, is not necessarily well received by all the researchers. You put a number on any kind of research and they flip out. It's like: I don't do my research to be the most popular; I do my research for XYZ reasons.
11:42
So if you're going to look at just what's popular and who rises to the top, then I'm not for that. It also doesn't capture all the published research; there's so much that is not necessarily in traditional peer-reviewed journals. It doesn't capture all the online discussions of research projects, as we know. And there's always that question of who's manipulating and gaming things.
12:04
All right, so back in 2014, when we had just started, we put a few publications in with the Altmetric company, and then I looked at that again a few years later over the same time frame. It was interesting to see, just by looking at graphs like that,
12:22
the difference in activities and the different sources. Now here's where it gets interesting for us, and how people use altmetrics. On the upper left you see, and this was in 2015, there was this article that had something like 15 different authors, of which our echinoderms curator was one, and
12:45
this rose right to the top. Echinoderms, sea stars and whatnot, are not necessarily the biggest topic in people's consciousness, but this had to do with a lot of climate change analysis, and also diseases and invasive species. So
13:02
that was really popular in those days, along with a parasitic fly that was around, so those were holding their own for a while, and then a plesiosaur, a reptile from the Mesozoic, that was in reasonable shape. But then, just this year,
13:20
well, last year I should say, we had hired a new curator. Think about it: if you're the new curator and you're doing this wonderful research, it's great to have your research being paid attention to, and it's even better when it rises to the top of
13:41
interest as measured using altmetrics; again, popular versus whatever, but let's put that aside for a second. This kind of information is good for the senior vice president, who is interested in seeing whether we made the right choice of people. Clearly, some of our mission is to be out there with you all, the public, and make sure that you understand the research
14:03
we're doing, and this is a way to prove that, so those are all pluses. Tweeting tends to be the big thing, and what I like to do is look not at the number of tweets per se; we really look at the number of users, and then the number of subscribers they have.
14:26
That's where the numbers are, and what I like to say is we're tracking eyeballs. So you look at PLOS; naturally they're going to have a lot of subscribers. But then you look at somebody like Acinto Davila; this guy is in Venezuela, and he's got a lot of
14:44
followers, and look at how many tweets: 66,000. He has a lot of time on his hands, I think. But what we're doing is looking at that to see who these people are. The communications office and the scientists want to know: who's looking at my research? Why are they looking at it? Is that somebody I should know, if I don't know them?
15:04
Should we be sending new information out to them? That kind of thing, excuse me, and that's really interesting.
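That "tracking eyeballs" idea reduces to a simple calculation: count unique tweeters and sum their follower counts, rather than counting tweets. A minimal sketch, with made-up numbers:

```python
def estimated_reach(tweets):
    """Sum follower counts over unique users instead of counting tweets.

    `tweets` is a list of (user, follower_count) pairs; the same account
    tweeting ten times is still counted only once.
    """
    seen = {}
    for user, followers in tweets:
        seen[user] = followers
    return len(seen), sum(seen.values())

# Made-up numbers: three tweets, but only two distinct accounts.
tweets = [("@PLOS", 1_500_000), ("@prolific_user", 40_000), ("@prolific_user", 40_000)]
users, reach = estimated_reach(tweets)
print(f"{users} users, roughly {reach:,} eyeballs")  # 2 users, roughly 1,540,000 eyeballs
```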
15:26
Then, looking at people like our vice president and his research, and seeing which of his many articles are being talked about: the interesting thing about the colored donut, or whatever the equivalent is in the other products, is seeing the different arenas where they're being talked about. Some are mainly tweeted; others have a myriad of different kinds of sources. That helps the researcher understand what kinds of things they should either be doing more of
15:45
or taking advantage of. The same with the departments: looking at, say, the Dinosaur Institute and seeing how effective they are out there. That helps us understand where we are going, and which departments are
16:07
more broadly looked at across all the different sources versus some that are not; neither one is bad or good, it's just a different way of looking at things. So what I'd like to do now is turn to us as a scholarly publisher. We have a peer-reviewed journal called Contributions in Science, and we have some monographic series and others.
16:28
We have a wonderful set of PDFs that are freely available on the publications website, but they have no DOIs, so they're getting accessed through Google Scholar and research that way,
16:41
and we can get Google Analytics, but that's about it. So we spent a lot of effort with the Biodiversity Heritage Library, of which we are an affiliate member: we digitized and uploaded that journal this year with the help of the Smithsonian Libraries and others, and did a lot of article-level metadata
17:03
entry manually so that we could track it, and it's got great potential for altmetric tracking. There are a couple of options we're looking at. One is the holy grail of using a URI or URL, a permanent URL, for articles without a DOI, or
17:21
assigning DOIs, either ourselves or through the Biodiversity Heritage Library, which is looking at that. There are pros and cons to all of that, and we can talk about it later. But you can see in this diagram at the top there are a couple of different ways you can look at the permanent URLs in the BHL.
17:41
They have one at the title level, but they also have one at what they call the item level, the equivalent of the article level, and then at the page level. Which ones do you use? How do you track them? How do you conglomerate them? It's an interesting challenge.
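One way to picture that aggregation problem: an article without a DOI may be mentioned under several permanent URLs (title level, item level, page level), and the per-URL counts have to be rolled back up to one article. A minimal sketch follows; the URLs and counts are invented placeholders, not real BHL records.

```python
from collections import Counter

# Hypothetical mapping of one article to its possible permanent URLs
# (title level, item level, page level); real BHL URLs would go here.
article_urls = {
    "Contributions in Science no. 528": [
        "https://www.biodiversitylibrary.org/bibliography/0000",  # title level
        "https://www.biodiversitylibrary.org/item/0000",          # item level
        "https://www.biodiversitylibrary.org/page/0000",          # page level
    ],
}

# Hypothetical mention counts harvested per URL by an altmetrics tool.
mentions_by_url = Counter({
    "https://www.biodiversitylibrary.org/item/0000": 7,
    "https://www.biodiversitylibrary.org/page/0000": 2,
})

def conglomerate(article_urls, mentions_by_url):
    """Roll per-URL mention counts up to one total per article."""
    return {
        article: sum(mentions_by_url.get(url, 0) for url in urls)
        for article, urls in article_urls.items()
    }

print(conglomerate(article_urls, mentions_by_url))  # {'Contributions in Science no. 528': 9}
```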
18:06
The benefits we see in this, from the institution side, are that altmetrics aligns with our institutional focus on quantitative and qualitative analyses, because it does provide a window into the counts, just like circulation counts in a library of who comes in. You don't know which one of those people wrote that amazing book or got a Nobel Prize,
18:21
whereas you had ten other people who were just there doing homework for their latest English class, right? It also augments marketing and communications analysis tools and is useful for any kind of research, for those who serve researchers, and for us as librarians in particular. It shows our awareness of new tools and also our usefulness to the mission of the department and institution,
18:44
because, I will tell you, our jobs are scrutinized as well: you maintain books, you maintain physical things, we don't need that. No, we're doing all this other stuff, and this is what we do. So, for today and tomorrow, I'll just say that
19:02
altmetrics is still new and morphing, even though it's about, what, seven years old or so. Librarians are key players in this and are active participants in information-wide strategies. And I'm going to leave you with a challenge, really for tomorrow's do-a-thon and all of you wonderful, intelligent, smart people.
19:21
I did a paper at the VALA Conference, a technical library conference, a year and a half ago in Melbourne, Australia. It's a peer-reviewed paper and it's up on the web, but it doesn't have a DOI. It's available, and a lot of you are quoted in it, I must tell you. So it would be great to figure out how to track
19:44
access to it, and the altmetrics on it, but I don't know how, so I'll leave that challenge to you. I thank you very much. We'll take some questions.
20:02
We have a few minutes, so we can take two or three questions, if there are any. Oh, there we go, we've got three questions; that'll probably do for us. Shall we start in the back there, please?
20:24
Yeah, there we go. I was interested in what you said about the communications office and how they're using the Altmetric tool, because I actually had the exact opposite experience. When we got the Altmetric tool, which we use at our university, I showed it to our communications office, and they said: oh, but
20:41
we have to be the first out, so we're not interested in tracking things once they've been out; our goal is just to get out there first, before anything else. Did you encounter that problem, or was it the opposite? Because some of the points you made were very interesting, about knowing who, for example, might be promoting your research so you can approach them in the future.
21:02
Were they positive about it? Did you have to convince them, or did they go, whoa, this is great? We also had a little bit of that, because their focus is getting out there first, or getting out there big. The idea with the museum, you know, it's a quasi-competitive situation, right?
21:20
Because we're all looking for funding and stuff, so they're looking at that. But they were excited right away, which was very nice, because honestly I was kind of surprised; I thought I would get what you're talking about. What they were saying was: we didn't know. You know that densovirus article I was telling you about, with the sea stars? They had no idea how
21:41
strong the attention on it was. And there was another one, which I didn't talk about in that big group, where they said: you know, we don't have a press release on that; we need to get one. Because what happens is they rely on the scientists to tell them: I just published something, or I'm about to publish something, here's the information. But scientists are busy doing the science, or the researcher is doing the research, whether in history or archaeology or whatever;
22:05
they're not necessarily focused on marketing it. The communications office has relied on the scientists, and this helps them with that. They also happen to license something called Extensis, one of those tools that does the broad
22:22
data-dump kind of thing, which overwhelms them with data daily. What they liked about this was that it is much more focused, to a fault, as we all know, but it does help them focus and see something. So for me, at least, I didn't quite get the reaction you did. We had two more hands, and we're going to have to be quite quick if we're going to take both questions.
22:44
Do we still have two hands, or do we have one in the back? Okay, thank you very much. So, I was struck by the number of times you said we don't have a DOI, and I'm interested in
23:03
why you don't get one. Well, you know, that's a great point. Our publishing editor is open to it; it's not the most prominent thing on her mind, but I'm going to talk with the Crossref people tomorrow about that.
23:24
I think she understands the utility of it going forward. As she said to me last week, she doesn't see a reason to do it for all the past items. I disagree with that. Except for the point you've already made, that it would be helpful. Yeah, exactly.
23:42
So this happens to be right at the transition today; I think in the next month or two we'll solve that and get it done, because it makes sense to do it. And they have really great rates for small publishers, just like you; and I don't even work for Crossref. Well, I agree.
24:00
Sometimes it takes somebody with the initiative and the interest to just make it happen, and we're all overwhelmed with all sorts of things. But I agree with you. On the other hand, there are so many other publications that don't have DOIs and won't anytime soon. We need to figure out how to get tracking on those, because they're not going to have it as easy as I necessarily will. Well, thank you very much, Richard.
24:24
Thank you. Thank you. Our next two speakers are Judit Bar-Ilan and Isabella Peters, and they're going to be talking about the next generation of altmetrics: open science, responsible metrics. It's a set of subjects which is very close to my heart,
24:44
so I'm going to be listening very closely, and I will be asking questions later. So please, a big hand for Judit and Isabella.
25:39
So, good morning.
25:41
We are going to speak about next-generation metrics: responsible metrics and evaluation for open science. Actually, this is not our title; it was the title given to an expert group in
26:00
2015 by the European Commission as part of their open science agenda. You can see who the members of the group were. We worked for about a year, a little bit more, handed in a report, and we're going to
26:23
report to you about our findings. First you have to understand what the European Union asked us to do: to assess the role of metrics and altmetrics in research evaluation,
26:43
with an emphasis on open science; this was the central agenda. To engage stakeholders, which we did, and Isabella will report on that. To consider the implications of metrics, both positive and negative. And
27:02
to explore how altmetrics impacts research, and how it can be implemented, already for Horizon 2020 and especially for the next framework programme.
27:26
We met, I think, six times, or five, I don't remember. Why?
27:41
Okay, it's good that you tell me. So now you can actually see the people, which you couldn't before.
28:02
You should have told me earlier. And now I can't; it doesn't go down to the next one.
28:34
So, yeah, something's not synchronized.
29:17
So, just a few words about the workflow; we're synchronized now, I see.
29:25
First of all: you're supposed to decide what matters and how to measure it, and then check whether there are available indicators for the task. If not, develop new indicators. Test the indicators, and if they are not measuring what you really want to measure,
29:47
revise them. Things you have to take into account are validity and reliability of the measures, and also
30:00
keep in mind that measurements influence the measured processes. If everyone knows that the number of tweets a publication gets is a parameter in promotion, then they will make more effort to have more tweets, or anything else.
30:27
One more thing is that saying attributed to Albert Einstein: not everything that can be counted counts, and not everything that counts can be counted.
30:40
Now let's say a few words about traditional metrics. You know that they are based on citation and publication counts, and they are not sufficient for assessing the influence and visibility of
31:01
research; they take time to accumulate. There are several other problems with the impact factor, and also with the h-index; there are disciplinary differences in publication and citation culture; and of course they don't take societal impact into account at all.
31:26
However, this doesn't mean that we have to throw them out altogether. Even for open science, besides citation counts and publication counts, which are important for open science as well,
31:42
you can have other indicators, traditional metrics, that can be useful: for example, if you want to measure the citation advantage of open access publications, yes or no, or collaboration in open science projects. These have
32:03
traditional metrics. And there is usage, which is something between traditional metrics and altmetrics, I would say, like downloads and views, and also reads, which is more of an altmetric already. Now let's go to the advantages of altmetrics:
32:22
increased visibility; expanding our view of what impact looks like; exposing research to the public; involving the public in discussions and commenting; and including non-traditional sources, which is important: blogs, data, software, and tools can all be measured by
32:46
altmetric indicators. There are events, and these events can be measured and counted, and they occur fast; this is one of the main advantages of altmetrics.
33:04
But altmetrics, as you know, also has challenges: coverage, transparency, validity, dynamics, disciplinary differences, gaming; and you'll see that gaming was important for the stakeholders,
33:21
as was acceptance, both by the research community, which we heard about before, and also by decision makers. The intermediate conclusion is that altmetrics, bibliometrics, and peer review should all be involved in assessing science, whether it's open or closed. And now I turn over to Isabella.
33:45
Yeah, thank you. So now let's have a look at the results of the expert group, and I want to report on the famous report we wrote. But first let me tell you something about how we developed the recommendations. We had several hearings with experts; we ourselves were, you know, the six appointed experts of this group.
34:07
We also had a call for evidence, and we asked the people from the 3:AM Conference to give feedback, which we incorporated in the report as well. So that's why it's only logical to report back to you at the 4:AM Conference.
34:21
You can find the recommendations, or the whole report, which is about 25 pages long, at this TinyURL website; I use TinyURL, not the Bitly link which is common here. Think "next generation metrics" and you will find it.
34:41
The recommendations we formulated were grouped under five headline findings, and then we had 12 targeted recommendations organized under the major ideas of the European open science agenda. What they want to do is foster open science:
35:04
they want to remove barriers to open science, develop research infrastructures for open science, and embed open science in society. For all of that they want to use metrics as a tool, and we had to come up with the recommendations for how to do that, basically.
35:20
The headline findings we had as a group are the following three. There are no perfect metrics: neither alternative nor traditional metrics can be considered perfect; we have to keep that in mind. Responsible use of metrics is key. And open science requires open metrics; this is an outcome
35:43
we had from the 3:AM Conference, which we were very concerned about and which we really tried to bring into the mind of the European Commission. Everything else can be more or less subsumed under these major recommendations, and I want to show you only
36:02
some of the recommendations we've given, because otherwise it would be too boring for you to just listen to all of them; please feel free to read them in the report. So let's start with a selection. In line with the principles of responsible use of metrics, an open science system should be grounded in a mix of expert judgment,
36:27
quantitative, and qualitative measures. We need to know which forms of assessment are useful in what contexts, which metrics are robust enough, who should count as a peer, and how we handle peer review in open science.
36:43
Hence, providing guidelines for responsible metrics in support of open science is seen as a short-term goal of the open science agenda that in the long run will foster open science; that is what we hope, at least. The next recommendation is that we should make better use of existing metrics for open science,
37:03
although many metrics do not reflect the transition towards open science so far. We know that altmetrics, for example, have the potential to complement more traditional impact assessment, as Judit already told us. We know that they are fast and reflect impact almost instantly. They are broad and reflect impact not only on other scholars but also on other audiences.
37:24
They are diverse, since they can be applied to various forms of research outputs, and they are multifaceted, since they reflect different forms of engagement with those research products; you can count retweeting, sharing, basically everything. However,
37:40
there are also traditional metrics that can be exploited for measuring engagement with open science, for example co-authorship metrics that measure collaboration. The short-term goal must be to assess the suitability of indicators for open science and to encourage the development of new, appropriate indicators in a responsible fashion.
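As one concrete reading of that co-authorship example, a simple collaboration rate can be computed straight from author affiliations. A minimal sketch; the paper records below are invented for illustration, not any real dataset:

```python
# Hypothetical paper records: each maps to its list of author affiliations.
papers = {
    "paper-1": ["KIEL", "KIEL", "BIU"],          # two institutions: collaborative
    "paper-2": ["KIEL"],                         # single institution
    "paper-3": ["BIU", "CWTS", "KIEL", "KIEL"],  # three institutions
}

def collaboration_rate(papers):
    """Fraction of papers co-authored across more than one institution."""
    collaborative = sum(1 for affs in papers.values() if len(set(affs)) > 1)
    return collaborative / len(papers)

print(f"collaboration rate: {collaboration_rate(papers):.2f}")  # 0.67
```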
38:00
A comparison of closed and open science may support decisions. Suitable old, alt, and new metrics should go hand in hand; they do not substitute for each other but complement each other, and they will give a holistic view of all the research products and the impact those products have. The next recommendation is
38:26
open, transparent, and linked data infrastructures for metrics and open science, and we formulated this to achieve the responsible use of metrics.
38:41
We think infrastructures are needed because metrics should be underpinned by an open, transparent, and linked data infrastructure, and standards are key in this regard. We need standardized and open data-collection processes, standardized indicators, standardized identifiers, and standardized data formats to make assessment comprehensible and complete.
39:03
Right now, altmetrics suffer from a bias towards research products with DOIs; products with other identifiers, or without identifiers, are still not counted. There's also great variety in which platforms are crawled to gather altmetrics and which forms of engagement are counted. Moreover, platforms such as Twitter only provide part of the data, and we do not know what part of the data that is.
39:27
Here, transparency and interoperability on all levels support constant change. Thinking further, openness as in open science should also entail free use.
39:40
But standards alone do not guarantee adoption. Providing the technical infrastructure is a first step, but a systemic change is needed as well. One of the most significant obstacles to the goals of open science lies in the incentive structure of academic research, which often fails to recognize, value, and reward efforts to open up the scientific process.
40:03
Researchers may even hamper their careers when they do open science. Hence, the short-term goals are to use open metrics and to reward the adoption of open science principles and practices, and we hope to remove barriers to open science by doing so, in light of open science, its intended benefits, and the
40:24
categorical change of the research system it requires. The last recommendation is the most important for me: measure what matters. Too often, metrics are developed because something can be measured, rather than finding other ways to account for the actual effects at work.
40:40
Do not mistake the map for the territory; there may be a number of impacts that cannot be reflected by a metric. In open science we should now reverse engineer: start with what is most important, with what the users need and what European societies value most. Availability of data should not dictate our decisions and the development of metrics.
41:05
The short-term goal here is to highlight how inappropriate use of indicators can impede open science. Inappropriate metrics can counteract the goals of open science, for example by reducing the diversity of research because of the tendency of journals to publish only mainstream science, or by creating
41:23
incentives that severely increase problems of research quality, integrity, and reproducibility. Metrics need to be developed carefully and with such effects in mind. Responsible metrics that follow those principles can serve as incentives for, and as indicators of, engagement with open science. As
41:43
such, they may remove barriers to open science, since they increase trust in science and the scientific system, which is an important factor for both the supply side and the demand side of science. You maybe recognize the words of the European Commission, which we also tried to incorporate in this report so that they recognize what we're talking about.
42:03
The final recommendation we want to give is the following: next-generation metrics, whether old, alt, or new, should better measure, reward, and create incentives for open science. We have shown that altmetrics are suitable for certain purposes, also in open science,
42:24
but they are definitely not ready for a one-to-one translation, and the same holds for traditional metrics and everything we've seen so far. Thus we need evidence first about which metrics support open science and which measure what matters most.
42:41
Research pilots and practical examples are needed to make informed instead of reckless decisions. So, yeah, please feel invited to read more in the actual report. We already discussed this report with two people at the Open Science Conference.
43:02
One of them was Professor Stefan Hornbostel; he is a professor of the sociology of science, and he took a closer look at the new data that comes in with altmetrics when you use altmetrics for research evaluation. He tried to find out whether we face old problems or whether we face new concepts, and you can see a little bit of an overview here
43:26
of what conclusions he drew out of that. But I think the major idea he had, and it is something we haven't really discussed yet in altmetrics research either, is this very interesting aspect:
43:41
he asked whether we encounter new problems when working with new data, and he came to the conclusion that yes, since altmetrics do not only count and assess; they also help us decide what to forget, because an archive also has the task of forgetting something. So we have to decide what we do not need
44:01
in the future, and I think that is something we should also keep in mind: that we not only use altmetrics for research assessment and to find what is most valuable, but also, maybe, to find what can be forgotten in the end. The other feedback we got was from Benedikt Fecher, who is an open science advocate, or open science evangelist.
44:25
He said that impact scores should reflect research practice, they should benefit the community, and they should be able to tell attention and impact apart, which I think is also a nice summary of our recommendations. He only needed a few lines for that; we needed 25 pages to recommend that to you.
44:45
So I think I will stop here. Please ask us questions now, and please engage in the discussion, especially online, so that the European Commission and the people who are working on the Open Science Policy Platform
45:02
notice you and your thoughts on it. Because our work is finished now, we cannot really report anything more to the people who are the decision makers. So please comment on it; try to express what you think is missing in the report or what is good in the report,
45:25
so that we have a lot of publicity, so that the decision makers really can make good decisions that also reflect what you think about it. Thank you.
45:42
Thank you. We don't have very much time for questions, but if there are perhaps one or two from the audience, we can deal with them. Hi, Euan Adie from Altmetric. That was really good. Obviously it's quite an objective study, and a lot of the recommendations seem like they're true all the time.
46:06
I'm just wondering, from a kind of subjective point of view: if you were asked to do this again in five years, is there anything where you think, oh, I really hope that I can change that recommendation or add this new one? Is there anything that you really wish, subjectively again, the altmetrics community was doing to push in a particular direction?
46:25
Do you want to start first? Usually I should not answer these kinds of questions. Well, in five years' time things will definitely be different, so there will be different recommendations. These are quite high-level recommendations, so some of them
46:45
probably will be okay in five years and others will have to be replaced; this is my opinion. Maybe we would even change something right now.
47:02
Yeah, I would definitely do something different; I try that all the time. The recommendations we had were a compromise. I learned a lot by drafting these kinds of reports; I learned a lot about the processes these decision makers apply to come to their ideas.
47:22
The thing is, now that we gave this report to another group, they will read it, they will try to interpret something out of it, and then they will make other recommendations, and then somewhere at the end, somewhere in five years, we get the real recommendations, or what the European Commission really thinks.
47:44
Yeah, okay, thank you. We have one more question, but it will need to be reasonably brief. Thank you very much, but in my opinion you didn't really say what we should value. So what do you think?
48:00
Should we value quality of research, innovation, impact, attention, that sort of thing? Yeah, indeed. I think the recommendations should also help us discuss these things again, think about it again. I mean, open science is still science; it's not something different.
48:21
So what we value in science should also be applicable to open science, but then we have to think about what's nice about the openness, what we want from the openness. Right now we are just rushing in that direction; we think that open science may only have benefits, but we do not really know that. I can remember, last time at the 3:AM Conference,
48:43
we tried to convince the people from the European Commission who were in the room: we need more time, we need more time to understand what's going on with altmetrics, what we really want, what they can actually measure, and the same with open science. We have to find out how they fit together.
49:02
But yeah, because this is the next, you know, big idea, there's not really time. So we tried to do it the responsible way, giving the right recommendations, and we hope that they will do the same with them. But we should all think about what we really want to measure in the end.
49:22
That's a difficult question, and it depends on who wants to measure, what should be measured, and what discipline we want to measure. But, I mean, you cannot start measuring without answering these questions. You didn't answer the question: what do you think we should measure, as one of the six experts on this panel?
49:40
But it depends: if you want to assess an individual or an institution or a project or the whole world, you need different indicators for that, and it depends on the discipline, and maybe others think so as well. So
50:00
we should have a large arsenal of possible indicators, and the decision makers, or whoever is doing the assessment, have to decide. They should understand these metrics, which metrics are suitable for which case, and be transparent about what you use to measure, so that those who are measured
50:26
know what measurements apply to them. Thank you. Thank you very much indeed. So we're going to move on to our final speaker today, Chris Manuel,
50:41
from the Canadian Institutes of Health Research, and Chris is going to be talking to us about using altmetrics to understand the research landscape. Not the first time we've been talking about maps and territories and landscapes today. As soon as we get the slides queued up.
51:29
By the way, if my chair collapses, you may be treated to an impromptu comic moment.
51:43
It's almost worth doing, right? Almost worth doing. Yeah, I'll see if I can get my legs over the top and hit the wall if I want to go over. So, hello. My name is Christopher Manuel, I'm a corporate performance analyst at CIHR, and I'm going to speak to you today about a large part of what my
52:03
responsibility is at the organization I work for. CIHR is one of the three federal funders of research in Canada, and we focus on health. We distribute about a billion dollars a year through
52:20
grants and awards, both open and strategic. I started working at CIHR in 2010. I was new to the field of evaluation, and fortunately I had a new manager, new to the field of evaluation as well. He had his normal teams, and he tasked me with finding
52:41
potentially new ways of measuring the outcomes of the research that we supported. What I'm presenting is not the only thing we do; we have multiple lines of evidence, and as in any evaluation, you need to triangulate your findings to make them stronger.
53:01
But my particular responsibility was finding some way to connect our dollars to observable outcomes beyond the immediate outcome level. All right. This theory of change was developed in about 2012,
53:20
but it was really what was guiding me from the onset. It's a very direct, linear kind of assumption, and it's a very simple theory; I am cognizant of the fact that there's a lot of interaction and things happening at different stages, but if I made it more realistic, it would not fit on the screen. So this is the simplified theory.
53:44
Basically, the idea is that we provide the funds to researchers; the researchers go out and conduct research; they disseminate the results of their research through knowledge products, which traditionally in health and medicine means journal articles.
54:01
They are contractually obligated to acknowledge CIHR in any publication that stems from CIHR-supported research; it's part of the funding agreement that they sign at the onset of a grant or award. That's why in the third step I have that CIHR is acknowledged as a source of funding.
54:25
The initial impact of that knowledge product is usually within academia, and that's where traditional bibliometrics come in. Then the knowledge product has an influence beyond academia, with an influence on decision making, and then the second-generation decision-making
54:45
documents influence behavior, and that influence on behavior hopefully results in some kind of improvement in health, health systems, or society. My relationship with altmetrics comes in at the fourth step, and it's not the traditional Altmetric
55:04
product that my association with the company has been about, but their ability to change their search strategies and customize how things are looked for. And, I'm sorry for the other companies here, but we use Web of Science as our source of information.
55:24
That's because in 2008 we started our open access policy, and in 2008 Web of Science actually started tracking funding acknowledgement information. So that's how we select our core set of knowledge products stemming from CIHR-supported research.
55:42
We know that there are lots of other publications coming out in journals that are not covered in Web of Science. We are also cognizant of the fact that there's under-acknowledgement and lack of compliance with our policy, but at least we know that these 76 or 77 thousand publications published since 2008
56:03
have some type of CIHR funding, because the author on the paper has said that CIHR funding went towards it. They could have been over-acknowledging as well, but we have to assume that this is correct. We also append end-of-grant records, because we ask researchers to list publications or knowledge products stemming from
56:24
the specific grants in their end-of-grant reports, and we're in the process of harmonizing those records with the Web of Science records so there's no duplication.
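A harmonization step like that is essentially record linkage: prefer a shared identifier such as a DOI, and fall back to a normalized title where one side has no identifier. A minimal sketch of that logic follows; the record fields are hypothetical, not CIHR's actual schema.

```python
import re

def norm_title(title):
    """Lowercase and strip punctuation/whitespace so titles compare loosely."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def harmonize(wos_records, grant_records):
    """Merge end-of-grant records into Web of Science records without duplicates."""
    seen_dois = {r["doi"] for r in wos_records if r.get("doi")}
    seen_titles = {norm_title(r["title"]) for r in wos_records}
    merged = list(wos_records)
    for r in grant_records:
        if r.get("doi") and r["doi"] in seen_dois:
            continue  # same publication already known via DOI
        if norm_title(r["title"]) in seen_titles:
            continue  # same publication matched on normalized title
        merged.append(r)
    return merged

wos = [{"doi": "10.1000/x1", "title": "Sea Star Wasting Disease"}]
grants = [{"doi": None, "title": "Sea star wasting disease."},
          {"doi": "10.1000/x2", "title": "A New Plesiosaur"}]
print(len(harmonize(wos, grants)))  # 2: the duplicate title is dropped
```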
56:43
That first bit was really how we measure influence within academia, using largely traditional methods. The next step is that we also measure influence beyond academia, by systematically harvesting downstream documents. We do this because not all of our publications have DOIs, and a lot of government publications don't include any kind of identifier in the reference. I know that in the evaluation reports
57:03
I participated in within CI HR. There is no DOI or metadata behind the the evaluation report, so There's been three runs of this. The first run was with the cooperation of AHRQ which is the Agency for Health Research and Quality in the United States
57:23
They they piloted the approach looking at guidelines in the national guideline library they maintain we had Well, we couldn't afford continuing doing it that way and we also didn't like the fact that it was the exact matching
57:41
So we contacted eventually altmetrics and this is where our relationship basically happened And I will describe the process but largely it's we know that these at that time 40,000 knowledge products have some CI HR funding And like in bibliometrics. We want to have see an observable
58:01
Evidence of it being used. So we asked altmetrics to search through a database of all Health Canada documents and Public Health Agency of Canada documents and Slough of other health technology assessments cadet documents and guidelines and other things to find any instance where there may be a
58:24
match where our research was used by the authors of those downstream documents and then we also Recently well in the last two years we've done it twice used the exact same approach but with patents looking for any instance where patent
58:43
References CI HR supported research as a source of evidence to support the intellectual idea that there were applying for so I'm not spending a lot of time on the the traditional bibliometrics we do
59:02
Do do it we subscribe to insights and we also almost always contract out bibliometric studies But part of the agreement of a lot of the studies recently has that they share the unique identifiers of the publications They include in the publication set because that unique identifier is key in finding out whether or not there's been
59:24
An observable influence beyond academia so we can complement their reports with our beyond academia data so when altmetric did the matching process they Delivered a an excel file with potential matches alright
59:40
So each of the matches had to be manually validated. It seems like a gargantuan task, and it is a bit of a gargantuan task, but there's so much value in it, because you start understanding how research is used beyond academia. It's not just this one piece of evidence, not
01:00:00
the smoking gun, that's going to result in the societal impact; it's the accumulation of a lot of evidence, and sometimes disparate evidence. The knowledge was created for one purpose, and it was repurposed by another entity to support this decision-making document. All right, so the manual review was long,
01:00:22
but it also allowed us to do a bit of content analysis. When a valid match was found, and that was largely: you open up the PDF, you search for the title, you find the title, it's valid; then you have to go through the document to find out where and when the information was used, and apply a
01:00:42
strength-of-influence score. The strength-of-influence score was taken largely from AHRQ and how they did it, and it has three levels of influence: a weak level, a moderate level and a strong level. A weak level is where, yes, you have a valid match, but that match was not really used in the body of the text;
01:01:05
it was merely a "for further information, please see". Or, in the case of health technology assessments and clinical guidelines, a lot of the research cited in them is listed under excluded studies because of wrong study design, wrong population or wrong outcomes; but they did
01:01:23
look at that research, and they did reference it. So that's the weak level of influence. The second is moderate, and that's where you find, in the body of the text of the downstream document, a reference to the KP. So somewhere they used the information from
01:01:45
that knowledge product in the body of the document. And then strong is where that reference is found in the recommendations of the clinical guideline or the final conclusions of the document, or the authors themselves say that a pivotal piece of evidence was
01:02:02
the knowledge product. There are instances where only one or two sources of evidence were used in a rapid synthesis or a health technology assessment, because there was no other evidence. Also, as part of the process, you're actually counting how many times it's being used within the document. You can see that on average it's only once or twice that
01:02:25
a KP is referenced in a decision-making or downstream document, but in some instances it's referenced 30, 40, 50 times within the downstream document. So the assumption we make is that if you reference something 30, 40, 50 times, it seems like the author is constantly
01:02:42
going back to that as a source of evidence. So that's also a strong influence on that document. I attempted to also code evidence use, but that was something we abandoned, because it was too difficult a task to really estimate
01:03:02
how the evidence was used without actually talking to the authors, and it was taking too much time.
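Expressed as a rule, the scale could be sketched as below; the heavy-use cut-off of 30 is an illustrative number taken from the 30-to-50 range mentioned, not an exact CIHR rule.

```python
def strength_of_influence(in_body, in_recommendations, pivotal, citation_count,
                          heavy_use_threshold=30):
    """Three-level, AHRQ-style strength-of-influence scale from the talk.

    weak     -- valid match, but only "for further information" material
                or an excluded-studies listing
    moderate -- the knowledge product is cited in the body of the text
    strong   -- cited in the recommendations/final conclusions, called
                pivotal by the authors, or cited so often (30-50 times)
                that the authors clearly kept returning to it
    """
    if in_recommendations or pivotal or citation_count >= heavy_use_threshold:
        return "strong"
    if in_body:
        return "moderate"
    return "weak"

# Hypothetical example: a KP cited 42 times in the body of an HTA report.
assert strength_of_influence(True, False, False, 42) == "strong"
```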
01:03:25
And then finally, within the Excel workbook, we collected a lot of other data related to the valid matched documents, including the full title, author, country, the release year of the downstream document, the area, the implicated institute, and a few other pieces of information. And all of that was linked to the unique identifier that I already talked about. There are a lot of limitations. The biggest is that we don't have an exhaustive library of downstream documents. Right now we have every federal
01:03:46
department of Canada, all of their publicly available documents. That's VIA Rail, the RCMP, Health Canada, the Senate, the House of Commons. It's the full set, and as of this year it's close to 200,000 documents. And then we have another 50 or 60 thousand documents
01:04:06
that we've pulled from provincial health ministries and CADTH. We also went beyond Canada: we went to Medicaid and Medicare and their health policies. We also noticed that all of the
01:04:25
private insurance companies in the United States must include, in each of their health policies, the evidence supporting their decisions on what they're going to cover and not cover. So we harvested a lot of their policies, from Aetna and from various Blue Shield and Blue Cross affiliates.
01:04:43
And this is really not the full influence of the research we supported; it is only the influence that we can observe, right? There's a lot of influence happening where researchers are members of committees or working groups, and we can't capture that using this approach. That's where we need to use case studies
01:05:04
and more in-depth interviews and things like that, and that's not part of my responsibility. Anyway, there's a long list. I'm going to skip how we create the custom publication sets and describe a few of the ways we're using the OIBA data. OIBA is just a way
01:05:22
for me to describe observable influence within and beyond academia. For performance measurement, most of the performance measurement strategies have some kind of data being pulled from this stream of evidence, and most often it is the percentage of supported research with an observable influence beyond academia. We also pull exemplars of impact, or
01:05:48
influence stories, from this; I'll show an example later. We have new parliamentary reporting requirements, and of the, I believe, 11 indicators we have, five or six
01:06:04
are OIBA-related. We have the traditional normalized citation score. We have a specialization index. But then we also have the percentage of health-related federal government documents observably influenced by CIHR-supported research. So basically,
01:06:23
that means we went through each of the Health Canada and PHAC documents, confirmed which ones actually included references, used that as the denominator, and then calculated the percentage of documents.
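That denominator choice, counting only documents that cite anything at all, is the one subtlety in the indicator. A minimal sketch, with illustrative per-document fields rather than a real schema:

```python
def pct_influenced(documents):
    """Of the documents that cite any references at all (the denominator),
    what share contain at least one validated match to supported research?
    """
    with_refs = [d for d in documents if d["has_references"]]
    hit = sum(1 for d in with_refs if d["cihr_matches"] > 0)
    return 100.0 * hit / len(with_refs) if with_refs else 0.0

# Hypothetical example: 4 documents with references, 1 without.
docs = [{"has_references": True,  "cihr_matches": 2},
        {"has_references": True,  "cihr_matches": 0},
        {"has_references": True,  "cihr_matches": 1},
        {"has_references": True,  "cihr_matches": 0},
        {"has_references": False, "cihr_matches": 0}]
print(pct_influenced(docs))  # -> 50.0
```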
01:06:42
We also have, as an indicator in our departmental results framework, the percentage of CIHR-supported research that has observably influenced a patent, via the matching of accession numbers, and also the percentage of CIHR-supported research compliant with our OA policy. This is something the Treasury Board was very interested in. The phrase, I believe, was: how many of those publications can
01:07:05
the average Canadian go on to Google and get? All right. So we had to come up with a number of questions in institute reviews and in evaluations. The current evaluation
01:07:21
using this data is the operating support evaluation. It has multiple lines of evidence, including bibliometrics, which is being done by a private entity, and they're sharing with us the accession numbers within the publication sets. We're also using the data to create
01:07:42
case study candidates for in-depth interviews and research, to show how research has gone from our funding to the university and beyond.
01:08:01
Some of the indicators are largely grouped within the CAHS framework for impact, which has several impact categories; some of these I have listed here. So, within capacity building, I'll just put up all the numbers right away. Capacity building is largely about improving the ability to conduct research or to absorb research. Part of
01:08:24
the process of what we do is we download all of the information related to the KPs, the knowledge products that acknowledge CIHR, and then we map them to all of our funding. We identify which of the authors are CIHR-supported, we get their funding history, and then
01:08:43
we associate the funding history based on windows of support. We tried using funding reference numbers or application numbers to associate publications to grants, but only about 40% of the publication acknowledgements actually included the numbers; they just referenced
01:09:04
CIHR. And as we dived into it and delved into it, it became more and more apparent that, yes, we could try to say that one grant or one award supported this research, but there was a large number of grants and awards associated with the authors on those papers,
01:09:25
and that's because we weren't restricting the analysis or coding to just the nominated principal investigator. We're looking at any role that could ostensibly be using the funds from CIHR, so we're looking at the principal investigator and the co-applicants as well.
01:09:42
So by using this mapping data, we could tell that there were about 16,000 unique PINs, that's personal identification numbers, with an average of two per knowledge product, and about 10 unique application numbers associated with each through the common purse of funding.
01:10:02
And then, because we've mapped it to all the administrative data, we know the sex of the authors that are CIHR-supported. This is CIHR-centric; we don't know anything about any of the other authors. We could also get the career stage: 20% of the titles had at least one early-career researcher listed as an author. Advancing knowledge is just traditional bibliometrics.
01:10:29
Then we had informing decision-making. There you can see that we had just under 3,000 unique titles of downstream documents influenced by CIHR.
01:10:43
Percentage of supported titles, so that's the knowledge products: 15.5% of the publications released between 2008 and 2010 had an observable influence beyond academia. That's going to grow as we run successive studies,
01:11:00
largely because of lag, and not only the lag to downstream documents but patent-approval lag as well. The KT (knowledge translation) gap we measure is ridiculously low: five years, whereas studies say 17 years. There are a lot of things working to affect that. We have an
01:11:23
artificial start date of 2008; we can't go before 2008, so when we look at CIHR publications there's this artificial start date. I think the lag from funding to influence will grow as time goes on, because we're only looking at the CIHR publications, so there's a maximum
01:11:43
of nine years that could currently be there. As we continue to collect this information, the lag from funding to influence will probably increase.
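A hedged sketch of how those averages fall out, using hypothetical fractional-year records that echo the roughly two-and-a-half-plus-two-and-a-half-year split quoted; because the publication set starts in 2008, later influence events are still accruing, so these averages are right-censored and will grow as more downstream documents are collected.

```python
from statistics import mean

def average_lags(records):
    """Average funding-to-publication and publication-to-influence lags (years)."""
    pub_lag = mean(r["pub_year"] - r["funding_start_year"] for r in records)
    use_lag = mean(r["first_influence_year"] - r["pub_year"] for r in records)
    return pub_lag, use_lag, pub_lag + use_lag

# Hypothetical numbers, not CIHR data.
records = [
    {"funding_start_year": 2008.0, "pub_year": 2010.5, "first_influence_year": 2013.0},
    {"funding_start_year": 2009.0, "pub_year": 2011.5, "first_influence_year": 2014.0},
]
print(average_lags(records))  # -> (2.5, 2.5, 5.0)
```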
01:12:03
There on the bottom, 19% of Health Canada and PHAC documents were influenced in some way by CIHR-supported research, and then the patents are on the other side. What was interesting, though, was that by doing the validation, and sorry, I'll hurry up, you could see the role of some key players or papers in the world. I would say
01:12:25
the most impactful ones were the papers associated with the PRISMA statement. The PRISMA statement is a methodology for doing meta-analyses, and it is referenced as a source of methodology in downstream documents from around the world.
01:12:44
It is the go-to methodology that they reference for how they're doing their meta-analyses. We also found, as we were collecting the downstream documents, some very important things that gave us ideas about potential societal impacts. So the CDR that I put there is the Common Drug
01:13:04
Review in Canada. We'll skip that and move on, but basically we're showing how evidence created by CIHR-supported researchers is being used in the recommendations on drugs, and those recommendations are used by provincial formularies in forming their decisions on what
01:13:25
drugs will be included in the provincial formulary. Lots of interesting insights; I'm going to skip them. I'm also going to skip our planned activities, except to say that we're now scaling up the OIBA approach with Genome Canada; we're trying to implement the same process
01:13:43
with them, in collaboration with them. And this was an example of a story that we could develop. The top part shows the process of how research influences,
01:14:00
how it has an impact, and it's about a Banting Fellowship award recipient. She published lots of publications, and subsequently those publications were used by a handful of Australian medical agencies or associations related to asthma. The American Pregnancy Association also used
01:14:23
some of her research in one of their publications. This here is a fact sheet that was developed for papers acknowledging both CIHR and Health Canada or PHAC, and it was divided among the categories of impact, capacity... and we're done. Okay, thank you very much.
01:14:49
We don't have any time for questions, but we're going to take a couple of questions, because I can see that we've got people hopping up and down. It's an overwhelming amount of information.
01:15:08
I'm wondering to what extent CIHR is going to comply with its own open access policy and share some of this with some of the other institutions who are funded and can make use of it. So, I'm a small cog. It is my idea and my project, but we are talking with
01:15:29
NAFRIL, because I've done an unofficial pilot for MSFHR in British Columbia, and they really liked the information, but they can't use it. So I think the idea is to
01:15:45
try to see if we could collaborate with the provincial funders, because part of the gargantuan task is the harvesting of downstream documents. If we could facilitate that by having the different provincial funders also collect the documents, I think that's the first
01:16:05
step that we envisage: that it becomes an approach the provincial funders use, and then who knows where it goes. But, small cog. Andrea, thank you.
01:16:24
Hi, I'm Andrea Michalek from Plum Analytics. Fascinating talk, and a great example for other funders of the kind of data you could collect if you put in a lot of time and effort. The one data point you gave that I'm really interested in was about the time from knowledge product creation to measured impact. Did I hear you right that you said you thought it would increase that amount of time?
01:16:40
It will increase over time. Right now that is an average across all of the funding we associated with each knowledge product: about two and a half years to publication, and then about two and a half years to influence on average. So that's five years.
01:17:05
But, because we can't go earlier than 2008, we don't know which CIHR publications are pre-2008. So in those guidelines, or in a Health Canada policy document, there might be other CIHR publications that were published before 2008,
01:17:23
but we don't know that. Okay, great. So you're talking about, once you analyze your historic data going back, not the things you're funding now, you expect it to take longer to get to? No, no. But as we continue to collect the data. I wanted to make sure I didn't mishear, because I thought, wait, that seems like the opposite. No, no. As we collect more and more data, we'll see the 2008
01:17:43
publications being referenced in more recently released documents, and it will be more accurate. Thank you. Thank you. We have one more question in the middle here. Hi, thank you very much. Your presentation was great. I just wanted to ask: with the,
01:18:05
I guess, no-longer-recent change in government and science funding styles, I was wondering if you've seen a change in the style of dissemination and in the quantity of research dissemination being done by researchers themselves? There's always been an increase, though there was a dip in numbers in 2014-15.
01:18:26
And you can see that on websites if you go onto them. In terms of impact and influence, it's too early to say. But I know that I was always well supported by management in the development of this. In terms of open science,
01:18:46
I don't know if that helps as well. This was not supposed to be a trick question, I'm sorry. Open science has been slowly increasing in terms of compliance with open access. So I don't think I can answer that, I'm sorry. That's okay. Remember, I'm not policy. I'm a tiny corporate analyst,
01:19:06
and this is what I do. It's very exciting. Well, thank you very much indeed. We're going to close this session now. We're a few minutes late, so I think we'll probably have a slightly truncated coffee break. So see you back here in a few minutes' time. Bye.