7th HLF – Panel Discussion: The Future of Scientific Publishing
Formal metadata
Number of parts: 24
License: No Open Access license. German copyright law applies. The film may be used free of charge for personal purposes, but may not be made available on the Internet or passed on to third parties.
Identifiers: 10.5446/44104 (DOI)
Transcript: English (automatically generated)
00:01
The first session in the afternoon will be a panel discussion on the subject of the future of scientific publishing.
00:30
The panel is semi-complete. By this I mean that all the panelists are in the building but not everybody is on the stage.
00:40
But we can be quite optimistic about the further outcome. The panel will be moderated by Gerard Meijer, who is from the Max Planck Society in Berlin. The other panelists will be introduced in due course, and I don't want to steal any more of their time, so I pass on to Gerard Meijer.
01:07
Thank you very much. I don't think I should use two microphones, and I hope this microphone works. Let me first start by thanking the organizers of this forum for reserving the time
01:24
to have a panel discussion on this topic that I think is of utmost importance. It's in particular of utmost importance to the very many young people in the audience. Although we are sitting here with a panel and we will start the discussion and we will start with statements from our side.
01:43
This discussion is going to be most useful if, in particular, the young people in the audience also stand up and voice their concerns, come up with their questions or maybe suggestions on how things can be improved. So at the beginning I invite you all to do this, to think about issues that are important to you.
02:06
What we thought we would do in this panel discussion is that each of the members of the panel will briefly introduce him- or herself and also say what their connection is to the scientific publishing business.
02:24
We have also each prepared a kind of introductory statement that we would like people to know and that might be a good basis for discussion. We are sitting in this order, and we will show the different slides.
02:43
I will just start by saying some words about myself and how I got into this, and then my statement will also be shown up there. My name is Gerard Meijer, I am Dutch, and I am a director at a Max Planck Institute in Berlin, the Fritz Haber Institute of the Max Planck Society.
03:03
I am a physicist, in experimental physics. The Fritz Haber Institute's research is on the border between chemistry and physics. I got involved in open access of scientific publications when I joined the Max Planck Society in 2003.
03:20
Actually, a fellow director then organized the first Berlin Open Access meeting, at which the Berlin Declaration on Open Access was signed in 2003. I was not very actively involved in that in the beginning. In 2012 I moved back to the Netherlands and became president of the Radboud University in Nijmegen. At that point I coordinated the activity on behalf of all the Dutch universities in the negotiations with the big publishers.
03:49
We were of the opinion that things should be changed in the present publication model and that we should negotiate different contracts with the publishers than we had done in the past.
04:02
In particular, all the publications, in that case from Dutch institutions, should immediately be published open access at no extra cost for the researcher. In 2017 I came back to Berlin, and I have been involved in the Germany-wide negotiations in that framework.
04:24
In Germany this is known as the DEAL project. DEAL stands for Deutsche Allianz Lizenzen: national licenses to be negotiated with the publishers on access to the journals and on open access of scientific publications. That has been my involvement.
04:42
The statement that I would like to make at the beginning is up there. I always start by saying that dissemination of scientific results is an integral part of a research project. That is why the cost of scientific publication should be carried by that same research project, that is by the authors or by the funding institution.
05:07
It should then be freely accessible for everybody to read and to reuse. In addition, the copyright should stay, I would say, where it belongs, namely with the authors, and the cost of scientific publications should be transparent.
05:22
Okay, so enough said from my side. We're just going to make the round, so I pass on to Efim Zelmanov to introduce himself. He hardly needs to introduce himself, but he will nevertheless do this, and maybe tell about a side of his that you all know a little bit less about. Thank you. My name is Efim Zelmanov.
05:44
I am a mathematician, and I am a member of many editorial boards of mathematical journals. I think that the open access model where authors pay absolutely does not work in the field of mathematics.
06:02
It may work perfectly in biology, chemistry, and experimental physics. On several editorial boards, we were recently informed by publishing houses, by Springer, by De Gruyter, that we would switch to this open access model and that this was not negotiable.
06:22
By the unanimous decision of the editorial boards, we switched publishers. Okay, that's a very clear statement and a very interesting viewpoint. One panel member unfortunately is still missing, although I saw her here before, so we just pass on to Julie Williamson.
06:48
Hi, my name is Julie Rico Williamson, and I'm a faculty member at the University of Glasgow. I'm in the area of HCI for non-planar displays, but my interest in publications kind of started when I first joined the ACM Future of Computing Academy.
07:02
Some of you here in the room might actually have applied to our recent recruiting process. Best of luck, it's a wonderful organization to be involved with. And so when I first joined the FCA, I chaired the Future of Publications Working Group, and we were very interested in things like bias in peer review, open peer reviewing, and open science,
07:20
and that's how I got interested in these kind of publication matters. I joined the ACM Publications Board about two years ago, and I sit on the Ethics and Plagiarism Subcommittee and on the Digital Library Technology Subcommittee. So I'm very interested in the archiving practices in the digital library and policies that govern all matters of publication, from conflict of interest to author name changes to retraction and withdrawal policy,
07:46
and just very interested in this kind of how we deal with publications at the ACM level. But I also am the SIGCHI Vice President for Publications, so I have a lot of experience with some of the practical implementation of these policies as well.
08:01
And for my SIGCHI work, I think a lot about author experience and accessibility and archiving practices and how that influences and changes the author experience, and often we're seeing more and more work being put onto authors, which is also a challenge, and we're thinking about the cost of publication. So there's kind of two main issues that are close to my heart,
08:21
and the first one is one that specifically deals with early career researchers, so I'm very happy to represent early career researchers as well on this panel. But I think that we really need to think about a culture of publishing less, not only so that we produce fewer publications of higher quality, but also to think about things about strain on the reviewing community
08:43
and the kind of potential dilution of the scientific record. I think it's important to think about what's the size of a contribution and what's the reason that we choose to publish. The second thing, which is probably closely related as well to open access, is also just open science practices more generally, an open culture of sharing ideas, being transparent, making data sets available,
09:05
making analysis scripts available, making research reproducible and very transparent from the very early stages of planning an experiment or planning some work. And I think this is really important, not only because of the open access issue, but also to increase public trust in science
09:20
and making sure that all of our practices are open and available and reproducible. Thank you very much, and then we continue with Klaus Hulek. My name is Klaus Hulek. I'm a mathematician from Leibniz University in Hanover. I've been involved with publishing from many different angles,
09:41
obviously as an author, as a referee, as an editor, but I was also vice president of research of my university, and at that time the library of Hanover was part of my portfolio, and this is the National Library for Mathematics, Sciences and Engineering. So I saw it from the library point of view.
10:01
I am currently editor-in-chief of zbMATH. zbMATH is the European equivalent of MathSciNet, so we are the biggest database for mathematical publications and reviews, and from that point of view we are obviously very much interested in how publishing will change.
10:24
And finally, I'm currently vice president of the German Mathematical Society, the DMV, and just yesterday we passed a motion on the use of bibliometrics in research assessment. I would like to make three points. The first is something we practically all can agree on, or easily can agree on, most of us at least.
10:48
Publicly funded research should be publicly accessible. If I discuss that with colleagues, I get very little resistance; I do not expect anybody to contradict me.
11:01
And then a little bit later on in the discussion I say, well, there is open access, open access will do that for us, and then the discussion can get very emotional. Somebody might say, yeah, but I've done all the work, I've done all the typesetting, and now I also have to pay for publishing. Why should I do that? That's a stupid model.
11:21
There's another answer one sometimes gets, and that's quite a high-powered argument in Germany: we have in the Constitution the freedom of teaching and research. And I've heard people say, this is against the freedom of research,
11:40
because I want to decide where I want to publish, and I do not want anybody to decide that for me. This is an argument I've not so often heard from mathematicians, but I have heard it from the chemists quite a bit, in fact. Mostly from the publishers, but sometimes also from scientists.
12:00
The last point I would like to make is that the role of publishing has changed very much. Fifty, sixty years ago, it was about the dissemination of knowledge. You went to the library, you took out the journal, and you found out something new. This is no longer the way it works. We put it on the arXiv, and everybody can see it tomorrow. But we still have a long and complicated refereeing process, publication process,
12:25
and I think the whole purpose of that is quality control and evaluation. And it's important where young researchers, but also older researchers, publish their paper, because that influences their careers and their chances to obtain grants. And I think that adds a lot of inertia to the system,
12:43
and does make it much harder to change to open access. So that's my introduction. I would suggest that we now first welcome Gabriele von Voigt as a panel member, and we go two steps back and ask her to introduce herself and give a general statement.
13:00
Okay, first of all, I'm very, very sorry to be late. I do apologize for that. Okay, I would like to introduce myself. I'm not a person who won all these prizes like some of our honored guests did; I'm not one of those. I'm just a normal computer scientist. I studied computer science at the Technical University of Berlin. I wrote my PhD and my habilitation on the application of virtual reality
13:25
and human-computer interaction within medicine. And I worked as a normal software developer in companies, in quite a few companies. I worked as head of production for a multimedia company,
13:40
which was very, very interesting for me as a computer scientist, to work together with all these creative people. But I also worked as a lecturer at UCL, University College London, and in Crete, which was a very good experience for me as well. And yeah, I worked in a university hospital.
14:01
I was the head of IT services there. And maybe one thing to mention: at that time, I was the head of the introduction of SAP. SAP you might have heard of. Yeah, so unfortunately we had a year 2000 problem. You young people probably won't really know what this means,
14:21
but our hardware and our software basically didn't run after the beginning of the year 2000. Unbelievable, but true. So therefore we had exactly one year to introduce SAP all over the hospital as a big bang with 2,500 users, nurses mainly. You can imagine how nurses work with a computer, you might do.
14:45
Especially at that time, and doctors, and of course administration. Yeah, that was a nice challenge for me. And thank God for that we were successful. But as I said, it was a challenge, and I think I learned something from that one.
15:03
If you face a big challenge, go for it. That's basically one thing I learned. Afterwards I ran a computing centre at a university. Klaus Huleck and I, we worked together. I work at the Leipniz University as well, and now my group is called Computational Health Informatics.
15:20
So I think health is a good area for researchers to be in. But maybe, do I have one more minute? Sure, I get one more minute. One thing, I don't know if anyone else mentioned any hobbies. I'm quite into sports. Why do I say that? I mean, I started with ballet,
15:41
which you can't see anymore really from the way I walk. I swam first, I was a competitive swimmer, and I played water polo. The best thing I ever achieved was coming third in the European Championships, which is not that much, and now I play tennis.
16:01
Why did I mention it? For me personally, I need sports in order to sort of refresh my brain. So I did quite a lot of sports, I'm quite into sports, and whoever is the same, I just would like to encourage, keep on going, doing it. It's not only that your brain will refresh,
16:20
you also have the challenge to go for another competition. Not only the intellectual one, which we do normally at work. So, but that's only one thing I do. Next to my work, I'm the delegate from Germany within the e-Infrastructure Reflection Group. As I said, I'm a computer scientist.
16:41
I was nominated by the Ministry of Research and Education, and two and a half years ago I got elected as a chair from this group. So if you're interested in e-infrastructures, we look into these things within Europe.
17:01
So we just published a new publication on national nodes, which deals with basically the e-infrastructures within all the European countries. And this with respect to the EOSC principle, you might have heard of EOSC, that's the European Open Science Cloud.
17:21
And this "open" brings me, funnily enough, to my topic: open and publication. Personally, do I have my slides? Yes. I think open is not enough. I would go for FAIR. Not only because I come from sports and I like the term fair, but FAIR meaning findable, accessible, interoperable and reusable.
17:44
So for me that's much more than just open. What it means for us in the future, in order to realize this principle, is first of all technical aspects like data exchange formats, these totally normal things, but also a change of culture
18:02
and giving the people who make their documentation and their code and their implementation and everything FAIR proper incentives. I think we older ones have to work on that in the near future. Thank you very much. And then Joseph Konstan.
18:23
Yes, so I'm Joe Konstan. I'm professor of computer science and the associate dean for research in science and engineering at the University of Minnesota where science and engineering includes mathematics and computer science. But the reason I'm here is because I co-chair ACM's publications board
18:41
where we deal with overseeing our dozens of journals and the policies and operations of our hundreds of conference proceedings a year, our books program and the many other publishing ventures that we undertake. I'm going to try to focus not so much on a position
19:04
but to frame some of the issues. And I'm going to frame five. I've never heard a researcher say, I don't want certain people to have access to my publications. I think the challenge with open access is not whether, it's how: how do we have open access for readers
19:23
without shutting off open access to authors, particularly authors who may not have the sources of funding for their research, who may be working alone or in environments or countries where the resources aren't there to pay for it. And how do we make sure that open access provides one of the other things
19:44
publishers still do today which is making things archivable and findable because if you look today a large number of papers not just from 20 years ago but from five years ago have simply disappeared when they were posted and someone reorganized a website
20:01
and then that domain went down and maybe they're somewhere but you can't find them anymore. Second, how do we move beyond preserving papers, sets of words and tables, to preserving the artifacts of research to support replication and reuse? How do we archive the data and the code? Third, how do we publish the important negative results
20:25
and the replication studies that move science forward? One of the things that you're going to see is that I really don't believe publication is important for its own sake. We don't need to publish because we need to publish. We need to publish because that's the best way we've found to advance our fields
20:44
and we need to make sure that publication is always secondary to advancing the fields and put to that purpose. Fourth, how do we build and maintain a community of reviewers? This is a huge problem today. If you talk to an editor or associate editor of a journal
21:00
they will tell stories of asking a dozen or 15 or 20 people to review some of whom come back and say I'm too busy many of whom don't even come back and say anything. If you look at the demographics, it's still the case in many of our fields that the majority of authors are in Western Europe and North America
21:23
I'm sorry, the majority of reviewers are in Western Europe and North America but the authorship base is growing in East and South Asia in a way that we have not yet figured out how to bring the reviewer community along at the same time to balance this out. This is a moment of crisis.
21:41
And last, I share the point that Julie raised of how do we address the quantity of publication so that the quantity of publication moves roughly in proportion to the quantity of significant advances in our field and is not driven by some vicious cycle of more is more for its own sake
22:01
rather than more is more to advance the underlying science. So as you'll see in my statement up there, I invite you to get involved in this conversation here today but as a professional as you move forward because the answer to these tricky challenges does not come from the people on this stage
22:20
it comes from the community building a consensus and then putting the work in to implement it. Thank you. Thank you very much for this introduction and this statement let's leave this slide up, let's leave the last slide up because this was also the reason that we did this in this order because this just addresses a lot of very important topics and questions that might be on your mind
22:43
so I guess we just, with your permission, we just leave this up. Actually sitting here I'm looking a little bit in the darkness I'm trying to see you as good as possible when you stick up your finger when you want to say something, ask something you can also just stand up, then I see you better
23:02
or you can go to one of the microphones, and then I'll certainly see you, and everybody who goes there and wants to raise a question or say something will get the floor for sure. So I'll do my best not to overlook you, but be aware that I'm looking a little bit into the darkness from this side.
23:22
We will start the discussion a little bit from our side. I'll try to reflect on the different viewpoints we have heard. I am very well aware that we are here with a special audience, I would say in different senses, in particular mathematics and computer science;
23:42
it is not representative of all the other fields. The different research fields have different publication methods, cultures, histories, different ways of archiving, and so I clearly understand that not all the statements made work equally well for all fields.
24:01
I would like to reflect actually on two statements that were made: that, as you said, open access where authors pay does not work for mathematics, and I also heard the statement, why should the authors pay? And I would like to say, from my point of view:
24:25
scientists publish for impact, not for money we are a very different crowd, we publish for impact, not for money a scientific publication is like advertising your work
24:40
if you put an ad in a newspaper, you normally pay for that ad to be placed. A scientific publication is about disseminating your results, no different from going to a conference, an international conference, and presenting them to a whole audience. It is interesting that most people take it for granted that when they are invited to an international conference
25:01
to present their work to an international audience then of course I mean the travel cost there and accommodation has to be paid from their own research budget people do not take it equally well for granted that the cost of publication, publication does cost something that is also part of the research project and should simply be paid from the research project
25:23
and thereafter be free for everybody so this is my answer to the question you posed maybe you want to react to it I didn't so much pose, this was not meant as an argument against open access it was just to highlight an argument which one often hears
25:41
and where there is a mental block sometimes where people just reject the ideas of open access because they have the fear I have to pay or because it could mean that some people who don't have the budgets or from other universities of developing countries might not have the chance to publish this is the argument one often hears
26:02
I think it will help if we make it very clear what are the different options, what does it actually mean open access there are many ways of organizing the process and putting this down on a more rational level rather than on this emotional level
26:22
so I did not want to use it as an argument against OA but just highlight it as something one often hears in this discussion but there is a point because we do have to make sure that everybody can publish even if they do not have the funds in their own pocket to pay for that
26:40
do you want to react to that? Oh yes, but before I react may I ask a question what do you think would be a reasonable price for an author to pay for a paper? Let me ask you another question and there is a very good reason for that that I ask you this question
27:01
because asking for a reasonable price only makes sense if you know what the standard is you need to know against what to measure it so I ask you, are you aware what the scientific community internationally, worldwide, pays in the subscription world per article?
27:22
No OK, without that it is a very tough discussion to say whether something is cheap or expensive the Max Planck Digital Library wrote a white paper in 2015 it is the most downloaded paper from the Max Planck Society ever where to prepare for open access they said
27:42
well in the case of open access everybody knows what an article is going to cost because you pay per article but whether that is a lot or not depends on what did we actually pay in the past in the past the publishers got the money because different libraries paid subscription fees to get access to the journals
28:01
and so what they did they basically looked how much money is worldwide paid per year on subscription fees to all the publishers and that turns out to be 7.6 billion euros per year how many papers are there per year? a little bit less than 2 million between 1.8 and 1.9 million
28:22
that means, let's round it off to 2 million that means that the average price what all of us have been paying what the scientific community has been paying per article in the subscription world is 3,800 euros per article that's an average that's an average over all fields mathematics is special, computer science will be special
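The back-of-the-envelope division quoted here can be checked in a few lines (all figures are the ones quoted on stage, rounded as the speaker rounds them):

```python
# Worldwide subscription spend divided by yearly paper output,
# using the figures quoted above from the Max Planck Digital Library white paper.
total_spend_eur = 7.6e9   # subscription fees paid worldwide per year
papers_per_year = 2e6     # ~1.8-1.9 million papers, rounded off to 2 million

avg_eur_per_article = total_spend_eur / papers_per_year
print(f"{avg_eur_per_article:.0f} euros per article")  # 3800
```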
28:42
but this is an average what everybody has been paying not a single author has seen this because it was via the libraries but this is what the income of the publishers was for all the work that the scientists did so I would say, in answer to your question everything that is less than on average 3,800 euros per article
29:02
then we're better off than we were before and I think a reasonable price should actually be below 2,000 euros we have been ripped off by the publishers thank you, now I will continue thank you for the answer so you mentioned 2,000 euros when we negotiated with Springer
29:22
they mentioned 1,000 dollars now I understand that they were very generous but they said it could go up the average salary of a professor in Russia and Ukraine and many other countries is 500 dollars a month
29:41
so 2,000 euros is half a year's salary they do not have any grants their work is an individual undertaking so if we switch to this model it means that we shut our journal to people from many countries in practical terms, that's what it means
30:02
so if I can respond to that I think you've now framed the two sides of this issue very well that nobody in Russia or elsewhere wants their papers not to be disseminated alongside
30:20
the top papers coming from Europe or North America but there's no way that you can charge people in certain places at the same level I think the challenge here is that we've spent so much time talking about authors paying and authors paying is not the solution
30:41
institutions paying may be the solution and institutions are not going to pay in an equal way across the world just as they don't pay to subscribe in an equal way across the world and if you talk about a future that will be a fair future it will be a future in which instead of
31:02
all of these institutions paying to subscribe all of these institutions pay different amounts to reflect their ability to publish the challenge to that today and the reason the transition is so hard is first, paying to publish is variable
31:23
particularly if you're at an institution that last year published two papers and you have a new mathematician who this year wants to submit five the numbers change rapidly and you need to smooth that out but the biggest reason this is a problem is it means that our top institutions
31:40
are going to pay substantially more than they've ever paid before we've looked at this in a field like computer science and let's understand what that means that means that places like EPFL or Tsinghua or MIT are going to pay a lot more than they used to pay in order to ensure that the work that they publish
32:03
in greater proportion than to what they read is readable everywhere you go around the world from Nigeria to Peru to Russia to elsewhere and probably a little more on top of that to cover the parts of the world that can't afford to pay
32:22
I think we're moving towards that consensus I think the governments in Europe in particular but elsewhere as well are moving towards that I don't think there's great disagreement I think it's just been unhelpful to have a bunch of very strong statements you can use Plan S as an example or others that are saying
32:43
wait a minute, authors can't publish here they can't do this, they can't do that rather than moving people towards what generally has a consensus but needs to be evolved into a reasonable business model to continue exactly with the statement that you have made
33:00
I mean the contracts that are being negotiated with the publishers now in different countries in particular also in Germany in the deal negotiations are along the lines that you suggest namely that the individual authors do not pay but that the money that was used in the library budget in the past to pay for subscription fees is paid to cover the publication cost
33:23
and there is enough money in the system to do that so that is the argument I very often indeed hear the point also from Heidelberg University for instance that the research-intensive universities will in the open access system
33:40
have to pay more for publication than in the past and that is unfair I would like to turn it around the cost of publications is typically one and a half percent of the cost of a research project again average over all fields mathematics and computer science might be slightly different but this is what it typically is
34:02
and I would argue that the system how it was in the past namely that the small universities that hardly could finance their research and hardly could finance any research still had to use part of their research budget to support the library so that they could at least read what the others did
34:22
that was actually more unfair than the situation where we go to where those who do more research indeed also publish more and will have the associated cost as well I would like to make a comment on that I think read and publish agreements are really exciting and I definitely think that redistribution of wealth
34:41
is a really good issue, good idea but I worry how long that goodwill would last once everything is open our institution is going to be happy to continue to pay more and more when everything is open but the question is whether do you think it will be more and more when everything is open do people think it will be more and more when everything is open why should it be more and more
35:01
I think the cost of publication isn't going to go down so this money is still going to have to flow in from somewhere I would argue the following we have thus far as researchers done an amazing thing we have always given the copyrights away to the publishers the copyrights is the bedrock on which the publishers founded their monopoly position
35:21
they had a monopoly everybody that wanted this paper had to pay them because they had the copyrights as soon as the copyrights stay with the authors where they should be that monopoly position is gone it will be the first time that you get the free market working in the publishing world it will change things completely I think you are talking about an interesting and potentially exciting future
35:44
that says what happens for instance if you decouple reviewing from the final publishing steps so that I get my work reviewed it comes in with high reviews and I can go to several different journals and say look at this wonderful thing
36:00
will you publish it at your journal at a reasonable price and I can negotiate that but I think we also need to recognize and this is where some of these other issues come up that there are parts of the publishing model that are broken today because we have tried to not pay for them and the biggest part of that today is probably the reviewing
36:21
it's not simply that you put the effort into writing and formatting it's that a bunch of other volunteers sitting out there put the effort into giving you the feedback step after step and making the editorial decisions to decide this paper was a paper worth disseminating and I think we are going to need to recognize
36:42
that in the future that may be something that has to be paid for as part of the publishing process I'm going to live up to what I said that we give people from the audience a chance to ask a question when I see they want to ask a question so I'm even going to bring her the microphone I hope this one is on or can be switched on
37:01
Yes, thank you I want to know what the panel members think of a new system that I think IEEE has recently implemented among others for FOCS which is one of the big prestigious conferences in theoretical computer science they established a model where the page limits like what your page limit is depends on how much you pay
37:24
like the first 25 pages are free and for every page you go over that you have to pay $5 so basically they are eliminating page limit for rich people and poor people have to suck it up Who wants to react?
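A minimal sketch of the page-charge scheme as the questioner describes it (the 25-page allowance and the $5 per extra page are her quoted figures, not a verified statement of IEEE's actual policy):

```python
def page_charge(pages, free_pages=25, rate_per_page=5.0):
    """First `free_pages` pages are free; each page beyond that costs `rate_per_page` dollars."""
    return max(0, pages - free_pages) * rate_per_page

print(page_charge(25))  # 0.0  -- within the free allowance
print(page_charge(40))  # 75.0 -- 15 extra pages at $5 each
```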
37:41
I'll react I hate it We've had extreme disagreements with our otherwise collegial colleagues at IEEE over some of the publications ACM and IEEE share together precisely for the reason that you say 1. We believe in the long run the idea of page limits
38:02
where most publication is digital doesn't make a lot of sense but 2. If you're going to have it it should be based on what the work merits and not based on who has the money to pay for the extra pages as a backdoor way of providing more money into the publishing
38:20
Klaus Hulek I think I object to that quite vehemently because one of the results is to publish shorter papers but many more papers which means more publications and this is a point you have made I think the sheer number of papers should not count
38:43
If a paper has to be long it has to be long, it might be a complicated matter which we deal with, this is fine and then it should stay long but to any kind of mechanism which produces more and more papers I think is detrimental and I would object to that
39:00
Just one reaction before I allow that question I do know that several it doesn't really address your question exactly but I do know that several open access publishers they have an APC, article processing charge that is dependent on the length of the article which somehow in a way makes sense because you probably have more work with it
39:20
when it is substantially longer but it is not intended to block certain things it is just kind of a price that is you pay the regular price for a 15 page article and when it is longer than that they charge something extra There is another question from the audience You mentioned copyright as a bedrock
39:41
but what is the value of copyright if you have free access? Unless Steven Spielberg makes a movie of Efim's papers there is no value to copyright, is there? Well, I mean, what is the value of copyright? That is a good question As you probably heard about Sci-Hub
40:01
the Kazakhstan site where a lot of papers are freely available that violates copyrights Elsevier sued them in 2016 in the US If you sue a site like that you have to identify article by article for which you hold the copyright that somebody else made available to everybody
40:22
So they sued for exactly 100 articles and identified those 100 articles Elsevier, of course, won that case and was awarded 15 million US dollars in penalties from which I would conclude these copyrights are worth 150,000 dollars per article
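The per-article figure follows directly from the two numbers in the anecdote (damages and article count as quoted by the speaker):

```python
damages_usd = 15e6       # penalty awarded to Elsevier (as quoted)
articles_in_suit = 100   # articles identified in the lawsuit

implied_value_per_article = damages_usd / articles_in_suit
print(f"{implied_value_per_article:,.0f} USD per article")  # 150,000
```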
40:40
and with free access the copyrights stay with the author so that is a big advantage of open access that the copyrights stay with the authors and normally under a Creative Commons license you can select how you want to do it
41:02
Does anybody else want to react to this? Maybe just on the copyright thing I mean, as a computer scientist it is not just only the publication, what you write At least for me it was not my code That was the thing I did I mean, I implemented things and I wanted to have a copyright, of course, on my code and I think that copyright is very, very important
41:23
from that point of view or at least for computer scientists Hi, I have a question basically you are about to start to talk about the reviewers and the reviewing process so perhaps it is a bit far ahead but I was just wondering so both as a scientist and as a reviewer
41:43
I keep hearing conferences like NeurIPS, for example not having enough reviewers and also, you know, for example when you are applying for funding in recommendations, quite always I see people saying that the people reviewing your application are going to see it for maybe 10 minutes so, you know, you should really write it briefly, etc.
42:03
So, my question for you I guess for all panelists would be Do you think that right now we, both in mathematics and computer science are experiencing either the shortage of reviewers or the shortage of quality reviewer time and what should we do about it?
42:21
Should we move towards some kind of automated reviewing process or something else? Thank you Yes, I think that it is a huge problem that papers are becoming extremely long and difficult it will take a year of reviewers' work to read it and therefore
42:41
it doesn't make sense to talk about pay How can you pay for a year of the work? There is no universal solution the editor tries to find referees and begs them Klaus, one or two, react
43:00
The question is can we make reviewing part of the track record of a researcher so that you get some acknowledgement for what, for the work you've put in There is one attempt I'm not sure it will be successful There is something called Publons where you can register
43:21
and then at the end you get the record of how many papers you have refereed The young people I've spoken to are not very enthusiastic about it but it's at least theoretically a way of creating some kind of recognition for that work If you have some ideas in this direction
43:41
I think that would already help because it would then be part of your career and you would therefore not be working in vain as many people think they are at the moment How many people do know about Publons? Tracking doesn't work until people value something
44:02
For those of you in your first jobs think about whether the person you report to is going to at the end of the year ask the question have you reviewed enough and well enough or if that person is going to ask you about other parts of your job
44:20
and assume if somebody does the reviewing it doesn't have to be you There is a certain amount of reviewing that's selfishly beneficial We see this with grant proposals that if you review a few proposals if you sit on review panels you learn how to write them successfully and people volunteer and then later when people are successful
44:42
they often stop volunteering because they don't see the benefit I think there's lots of mechanisms there Economics can teach us many lessons We can require within certain communities that if you're going to author you need to also provide a certain number of reviews We can do financial compensation and rewards
45:01
lots of mechanisms but it starts with valuing it and probably the biggest mechanism we should be thinking about is how do we train and nurture the people who are willing but not yet very good at reviewing so that we don't simply throw away their volunteer effort and say
45:20
this isn't a good reviewer but instead turn them into a good reviewer Is there anybody else from the panel who wants to react? I see this for example in my main conference at the SIGCHI flagship conference that submissions are going up at over 10% or so per year and the reviewer pool is growing at about 2% so this is a completely unsustainable situation
45:41
and the stakes are so high to get one of these papers that people are submitting more and more and more papers which makes the problem even worse and it just goes back to I think publishing less and submitting fewer papers is something that we have to start valuing Point you made earlier Next question Hi, I'm not sure how to put this question
46:00
but let's imagine that I I think that open science is much more than just making one's work available to everyone but it also means that everyone should have the possibility to make their work available to everyone so let's imagine I want to change this so Professor Constant I think
46:20
said that we are the ones that should make some change make some effort in changing but how can we change something when in news sometimes we see some universities trying to negotiate with these big publishers and usually that doesn't go very well how if big universities try to negotiate with those publishers
46:41
they do not achieve much how can we achieve whatever we believe is correct thank you I'll give a very short answer to this one there's five or six steps you can take as an individual that there's no university I've ever heard of that will stop you make a personal decision that you're going to engage in certain open practices
47:01
register your hypotheses in advance if you're doing hypothesis driven research make the decision that you're going to publish the work that you have with an open preprint whether that's archive or another server in your field decide you're simply not going to publish in a place
47:21
that doesn't allow you to do that and put your code your data whatever it is that you have that goes with your paper in a repository whether it's related to the publisher, your institution or for the area of work that you work in there are enough publishers the vast majority of them
47:42
will allow you to do all of that and still publish your work under whatever agreement your university has and then anybody who would like can point people to your work in its preprint form or in its published form they can point people to the things they need to build on your work and you know that you're doing the right thing
48:02
If I just may add to that I think it's extremely important that every individual does what he or she can do in that field and for those that are convinced that open access of scientific publications is the right way if nobody would review anything but only open
48:20
access publications the rest would be gone pretty fast I'm very happy to see many more questions coming up We do it in order Thank you for this session My name is Jim Abdugani from Nigeria My question is Late April this year I presented my research in a conference at my university
48:41
but the problem I have is you said I should pay for the publication The question I'm having for the panel is is HLF providing a platform for us to publish our paper for free? That's my question Who wants to So I think
49:01
in the different fields there are of course platforms like the arXiv where you can put this but also I would say that open access journals normally have a budget and this counters the argument that people who cannot pay for it are not treated
49:21
correctly so open access journals normally have a budget for those who cannot pay the cost of publication from their own research budget. Every open access publisher typically has between 5 and 10% of their budget reserved for that and the conclusion is they actually never need it. So if this case comes up you send it to a journal you ask
49:41
for the specific funds probably it can be done like that as well but maybe some others know better what is specific platforms in this field I cannot There are some by now quite good quality journals which are truly open access and which do not cost
50:01
anything to the author. We can ask where the money for that comes from that's typically some subsidy from somewhere else because money has to be invested you don't get publishing for free but you can do it and this also I wanted to come back to the question what can you do to achieve open access or achieve that
50:21
we move more towards a culture where everything is openly available. Of course we could theoretically only publish in open access journals or we could take journals and flip them and make them open access as Timothy Gowers proposes but in practice
50:40
it's very very difficult because you have to publish in good quality journals and good quality journals need a long time to establish themselves so this forces young researchers to publish in certain journals and these certain journals are very often the traditional ones and of course the names and the brand
51:02
name is owned by a publisher so this is why one of the reasons why it's very very time consuming and difficult to make this transition but there are ways there are possibilities to influence that and it's easier for a senior colleague to publish in open access because he or she will be less
51:23
dependent on this immediate recognition than say young researchers Next Hi so I'm really happy that some of the panelists mentioned the publish or perish culture because I always hated that culture
51:41
because I didn't get into research to become a paper machine and that's exactly how I feel sometimes I'm just there to bring attention to sponsoring institutions, agencies and so on right? But of course if we want to get away from this culture we need to talk about metrics because as a computer scientist I think one of the most important pieces that
52:02
brings impact to people is software and I went to a workshop a few years ago and one of the keynote speakers even mentioned I can't get tenure by writing software because no one is actually investigating how is the impact of that software
52:20
in other people's research so then my question to the panelists is do you know if there is any work in terms of metrics, how we can actually measure the impact in other ways that is not the number of publications or even the top venue that you publish? I guess that's a question for the e-infrastructure reflection group Oh yes, it's a very good question
52:42
unfortunately I don't have the answer I'm very sorry I've got one comment on that which is something that I really like which is artifact badging, this is a review process that for example ACM has in certain communities to review those artifacts once your paper is accepted and actually archiving the artifact alongside the publication so you have this other material that's there
53:02
and you can search by badges and the badges have different levels of reproducibility and openness and I think this is a kind of nice first step in that direction of showing the value of research artifacts beyond the printed word which is often not the huge part of the publication My personal experience is that only at the top, really top universities
53:22
they don't count papers you can get tenure with one or two papers they are sufficiently self-assured to give it to you if they think that the papers are worth it I would like
53:41
to mention one thing in zbMATH we also have swMATH this is for software and this is a way how you can see which software has been used in which mathematical papers so this is one way how you can demonstrate that say software or some other
54:01
artifact has been valuable to other people there is also a long discussion at the moment about research data and then we come into all these questions of how do we cite software, how do we cite research data exactly in order to of course reference them
54:21
properly but also demonstrate the value of this work that has been done that's an ongoing discussion but it's certainly an important one so I want to push back on your argument that you need metrics your institution may believe it needs metrics but your institution is filled with a bunch of
54:41
people with expertise who will ask other people with expertise about your work and there is no replacement for reading papers and talking to experts about the impact that a piece of software has had that said there have been certain documents, manifestos over the years that have helped
55:00
people who do experimental work make the case as to why they should be promoted in the US probably the most influential was a national research council report on experimental computer science and engineering in the early 1990s that came out and said if you measure these people the way you measure everyone else you're
55:20
not going to have anybody in academia doing experimental computer science and engineering you need to value the artifacts you need to recognize that a single paper may take several years as you build up the artifacts and that was probably partially responsible for a thousand tenure cases including my own there have
55:40
been others who have taken much more sophisticated measures of software today of saying well we're going to now track download counts we're going to ask people what they do with that software and the cases I see today of people whose software is a major contribution are making not just a quantitative but a qualitative argument having
56:02
leaders in the field make the case that hey without this software we couldn't have advanced graphics we couldn't have developed this processor we couldn't have proved this theorem and that's the kind of thing that will advance you and if you're really stuck in a place that does nothing but respect counts then go
56:22
make sure that you're counting everybody who looks at every page you have and every download so that you have something to supplement the really important qualitative judgments of your work. My microphone is on, now it's on again. If I also might want to react to that
56:42
I think it's also very important to discuss this issue and it's good that there's awareness about this issue and ideas on how it should be done differently are very important. For instance in Germany the DFG the research funding organization has gone away from the fact that in the past
57:02
they always wanted for instance a list of publications now they just want your five most important publications and not a whole list or something but those papers that you think are important and they want to see the papers, they want you to send them. This is still not the final solution but it shows there is awareness
57:22
people are thinking of how we can improve the system and we really need all your input and that's why I'm very happy with the suggestions and questions that come up from your side to improve the system. Floor is yours. I'm a bit short for this. If I may just comment on the last
57:41
question myself before I ask my question in speaking about metrics for open science I'm currently I've just joined the repliCATS team and there's a very large project that's just begun that was spearheaded by Brian Nosek who was a big part of beginning the replication crisis
58:01
and the conversation about this and he ran a project about ten years ago that exposed as many of you would know the problems in Nature and Science and the replication and the reproducibility of results. He now has a grant there's a massive grant, a hundred million dollar grant by DARPA to run a study, and what we're
58:21
trying to do, I've joined the reproducibility team as a data analyst and we're trying to develop a metric of reproducibility and replication for studies. This is the first phase, so we're in a sort of a qualitative data gathering phase at the moment, but I think that's a really interesting project that's happening in open science
58:42
and I don't know how well we'll answer it but I think it's an important question to be asking. If I may ask there, is that for all fields or is that specifically social psychology or something? It would be focused on social science, yes because it's coming out of Nosek and I'm working with the well I'm currently working with the philosophy
59:01
of science interdisciplinary meta research group at Melbourne Uni. So they're bringing in lots of different fields, I mean I myself am in computation and mathematics so it applies for the things but it's very focused on social science, this project yes. I think it's very interesting and that's exactly I think
59:21
the situation we have in Europe at the moment with the European Open Science Cloud so if you want to have open data if you have, let's call it a physics experiment, it makes no sense just to have the paper but not the data and the code and everything which you use to run this
59:42
experiment. Unfortunately if you think about it, it's not that easy to store the code because it runs on a certain operating system and all the software libraries don't exist anymore. So all this is at the moment discussed and I'm glad that you do the same exercise. So what they do at the European Open Science Cloud
01:00:00
try to offer these repositories. As you mentioned, at least you can put it in a repository. But I think at the moment, that's why I said I don't have the answer. I think we are exactly at that moment, we are on the stage that we have to develop something in that direction. Well, that leads directly to my question. Oh, perfect, thank you. So, I love this panel because my PhD
01:00:22
is in reproducible computing within statistics. So this is very much to my taste. And something that, you know, even in statistics and computing, we're not trained in this aspect of sharing data. There's a huge obstacle in terms of training where it's well and good to talk about best practices in scientific computing.
01:00:41
But most of us are not trained in formal computer science, version control, online repositories, cloud computing. I myself did undergraduate mathematics. I learned to solve equations. So my question for the panel, I'd be really interested in how do we disseminate these skills to researchers when that's not really their domain? I mean, even for those of us
01:01:01
in statistics and mathematics, our domain is not in that side of computation. Our domain is something else. So how do we develop good enough practices that are realistically adoptable by researchers who are not, first and foremost, computer scientists who specialize in online access of data?
01:01:21
Who wants to take that question? I'll take the first cut at it, which is this: the other thing we're not trained to do very well is ask for help, because there are people out there who specialize in this. In our university, they're within the libraries. The librarians aren't just waiting for someone to say, hey, can you help me find this journal article, anymore?
01:01:40
They know you can find it yourself. They're out there saying, hey, we can help you plan the archiving of research. We can help you plan a whole strategy of dissemination. And what I've found is there's a whole network of these librarians, at least in the US and Europe; I don't know if this is worldwide yet, but I suspect it's growing that way,
01:02:02
who in their training are being trained not as classical librarians, but as data librarians. That's an analogous situation to the people I deal with on the medical side, where no medical researcher I know ever does a study alone.
01:02:21
They have a team, and they have a statistician, and they have a data person, and they recognize that there are certain things that don't need to be their expertise. They'll rely on others. At least in our case, these librarians are sitting underused, available for free, if people come to them. And they ask: come at the beginning.
01:02:41
Come at the time you're making your research plan. Don't come after you did it wrong and ask us to help you fix it. We'll try, but come when you're making your research plan, when you're writing your proposal, and we will help you develop a viable plan that works with the right fields. I would also point out, though, that especially when you're talking about the crisis
01:03:01
that exists in social science, but this is true in parts of computer science as well, the crisis is not just about putting data out and making things amenable to replication. The thing that caused this problem in fields like psychology was the idea of worshiping a p-value and then at the same time,
01:03:21
having computing and experimental technologies that allowed us to run a million studies and convince ourselves that a thousand of them were really significant because they could only happen by chance one time in a thousand. And if we don't take the next step of saying that our research questions
01:03:41
are grounded in theory, that their exploration is separated from their validation, and that we actually register those questions, hypotheses, and designs in advance, we will simply fall into the trap of sharing more and more data that other people can't replicate anything from
01:04:00
because it was a fluke, and that doesn't help anybody. So, Julie, you wanted to react to that? Yeah, yeah, so, I mean, having the data sets and everything be open is not enough. It has to actually be usable. It has to be in a format that's reasonable, and I know probably most of us have data sets that, if we looked back at them from 10 years ago, we'd have no idea what's in these files. But I think there are some tools
01:04:20
that can make this a lot easier. For instance, since I converted to doing my analysis in Python notebooks, that's made a huge difference. But it does mean the legwork that goes into those analyses is a lot more, and there's a lot of trust, you know, that you're putting out there in inviting that level of scrutiny of your publications. So it's one thing to say, yes, I'll publish my R scripts,
01:04:40
or yes, I'll publish my Python notebooks, but that's putting yourself out there in a way that has not really been done so often before, and it can be a scary thing. But I think it's worth it. Thanks for that. I wanted to mention something. In this country, we are currently going through a complicated process of
01:05:03
reorganizing the way we deal with research data. It's called NFDI, Nationale Forschungsdateninfrastruktur, the National Research Data Infrastructure. So the government has said, we have a problem with research data, with the availability,
01:05:21
with FAIR usage, et cetera, et cetera. We have to organize that. Of course, it's an international problem, but the German government has said, we have to address that question. And currently, there's a big proposal, or rather a big sum of money which has been made available, and there are now various consortia
01:05:43
forming in the different fields. There's one in mathematics. There are in physics, I think, two or three. And these consortia try to sort of combine the various players and to address that question of how do we deal with research data? How do we store them,
01:06:01
make them usable? How do we actually manage to implement these FAIR principles? But it's a huge, huge job. And it's not just data; it's about our algorithms and how we build trust. I mean, another aspect is unit testing and coverage badges; say, in R, we use covr to measure unit test coverage.
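As a side note on the practice just mentioned: a unit test is a small automated check of one piece of analysis code, and coverage tools (covr in R, coverage.py in Python) report how much of the code such checks exercise. Here is a minimal sketch in Python using only the standard library; the `normalize` function is a made-up stand-in for a real analysis helper:

```python
import unittest

def normalize(values):
    """Scale a list of numbers so they sum to 1 (hypothetical analysis helper)."""
    total = sum(values)
    if total == 0:
        raise ValueError("cannot normalize values that sum to zero")
    return [v / total for v in values]

class TestNormalize(unittest.TestCase):
    def test_sums_to_one(self):
        self.assertAlmostEqual(sum(normalize([2.0, 3.0, 5.0])), 1.0)

    def test_rejects_zero_sum(self):
        with self.assertRaises(ValueError):
            normalize([0.0, 0.0])

# Run the checks without unittest.main(), so this also works in a notebook:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestNormalize)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests passed:", result.wasSuccessful())
```

A coverage tool then reports what fraction of the code such tests actually exercise, which is the number a badge summarizes, and exactly the kind of check the speaker found missing in three quarters of R packages.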
01:06:22
But I just did an analysis for my last paper and only one quarter of R packages have any unit tests at all. So I would also, I mean, as was also remarked by the first reaction you got, I think within the university, there's always a lot of knowledge on each of these specific topics. And I think better use
01:06:41
should be made of that in general, not only in the library, right? And at our university, we did indeed make the mathematicians team up with the social scientists to get some things better straightened out and to have discussion groups about specific topics. I would like to move on to the next question in the row. Thank you very much.
01:07:02
Hi, hello. When I think about most of the problems that we have been discussing, the excess number of publications, the load on the reviewers, I always come back to one core problem, which is what we measure, and our inability to measure the quality of our work.
01:07:21
Simply the number of citations might not be the best way to quantify it. We can ask, are all citations created equal? And what can we do to make those systems more fair? Automated systems might have their own biases; the reviewing process might have institutional biases. So where do we start to develop a new system
01:07:42
to evaluate or quantify the quality of science? Does this start from the publishing venues, the academic institutes, or the researchers themselves? You want to add? Okay, one word. Well, I formulate a meta-theorem
01:08:01
that every formal system, every formal metric, can be manipulated, beaten. Top universities, at top universities they read papers. At lesser universities, they rely on the h-index, the number of citations, because it's just convenient for administrators.
01:08:21
They don't need to think. Yeah, but his point is right. Sorry, his point, yeah, it is right. Often, as you just said, the universities do rely, for example, on the h-index, because it's easy. You just go there and you get a number. What more do you want? Yeah, that's easy. So I do understand your question totally.
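For readers following along, the h-index being discussed has a one-line definition: a researcher has index h if h of their papers have at least h citations each. A quick sketch in Python (the citation counts are invented) also shows one way the metric can mislead, since many moderately cited papers beat a few landmark ones:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank   # this paper still supports a larger h
        else:
            break      # papers are sorted, so no later paper can help
    return h

# Two invented publication records:
landmark = [100, 90, 4, 2, 1]    # two very influential papers
steady = [6, 6, 6, 6, 6, 6]      # six modestly cited papers

print(h_index(landmark))  # 3
print(h_index(steady))    # 6
```

The steady record scores twice as high despite far fewer total citations, which is one concrete instance of the meta-theorem that every formal metric can be beaten.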
01:08:40
I think often it is done that way. So I think that the answer is, in the end, only experts can evaluate other scientists, right? And you need expert committees or other expert staff at universities to do this. It's a very time-consuming process, and that's why it is not done at all levels, though it should be. But there are more people
01:09:01
that wanted to react to it. Julie was first. Just a quick comment. I mean, one of the things is that this whole publishing world lives on the faith we have in the peer review process; we think it's a good process. And there are some really nice initiatives, so openreview.net, I believe, is their name. That's a really exciting kind of idea, because you do then have this visibility
01:09:20
of the work that goes into reviewing and recognition of that reviewing process and it's nice to see something like that happening, especially in a double-blind reviewing system. So there are some exciting initiatives, but I'm not sure if it solves the issue. But if you're a junior researcher, do you consider yourself not biased, reviewing a senior person's paper?
01:09:41
Because even in the peer review process, there are lots of intrinsic biases that we should address in order to quantify research in a better way. Yeah, well, there's a huge number of biases, and that's why, we see, it's a little bit frightening when conferences like NeurIPS do an analysis of bias and noise in reviewing. But we still have this baseline of trust that peer review is a good system,
01:10:00
and I'm not sure if it is, but I mean, there are a lot of issues there. I think we simply don't have a better system yet. And so, I think Klaus wanted to say something. Yeah, just yesterday we had a meeting of the board of the German Mathematical Society and we addressed that question. So we've passed a memorandum on how to use
01:10:23
or not use bibliometric data in research assessment and we've made it very clear. Well, one thing to be said is we have to be careful because we sort of all do it ourselves. So if we are on a job committee or if we are on a grant committee, we do look at how many papers,
01:10:42
where has the person published; we do look a little bit at citations. However, we made it very, very clear that practically every numerical algorithm can fail and can be manipulated, and I fully agree with that. And so the message we've sent in that statement
01:11:01
is: never do anything without peer review. Now, peer review can also have its pitfalls, but we very clearly say, never do it without peer review. I must say, I'm all for double-blind peer review; I would add, never without double-blind. In this country, we are reasonably fortunate. I do not know of many universities where they have these very rigid numerical algorithms,
01:11:23
but I believe in some other countries it is different. So I think we should recognize that the fact that we're all inherently lazy and would love to have a number rather than read your papers is also something you can use to your advantage systematically in changing the system.
01:11:41
I mean, hearken back to those papers, or the talks, way back in the morning. We have technologies such that you or someone else in this room could say, you know, what if I got a bunch of those top-university assessments of people that were done with all the careful work, and I dumped in all of the metric data I can find,
01:12:05
and tried to learn a scoring system to put alongside the other metrics? And it would be an interesting exercise to see if you could build one that people had greater confidence in than they have in the h-index
01:12:22
or in some of these other things, because we know those metrics have problems. They're skewed by subfield; they're skewed by a lot of things. I don't think we have to give up on metrics. I just think we can't rely solely on them. I totally agree, but do you think, if we train such a model using existing data, do we learn what we should really value in research,
01:12:43
or will we learn the intrinsic biases of the publication system, or the editors, or other things? I think what's critical is that the outcome you're training towards cannot be whether they got papers into the top venues. It has to be something where you look back at career-long impact. And I'm fully recognizing the fact
01:13:03
that that data because of the system will have been skewed, it will have been skewed against women, it will have been skewed against disadvantaged minorities and we're going to have to figure out how to move beyond that skew and attempt to oversample the data in those communities
01:13:24
to be able to build something that can actually attempt to do something useful. None of that will replace the fact that the best places would rather rely on human judgment but it might at least give us a fighting chance that what we're not out there doing is encouraging rings of people to cite each other
01:13:41
in the hopes that we all get promoted, because we now have lots of citations from our friends. So, the panel doesn't agree on all points. On this point I stick with Efim and his statement; I think any metrics will be really difficult. I point out that all universities in Europe
01:14:00
and all research organizations worldwide have probably all signed the San Francisco DORA declaration, the Declaration on Research Assessment, where, for instance, rule number one is that citation impact scores should never be looked at, and still everybody uses them. So it is a problem. It needs to be addressed. I'm looking at the clock. We have 15 minutes.
01:14:21
There are several people standing behind you. I would like to give them all a chance. Thank you very much. Next question. Hi, thank you so much; the panel has been super insightful. My question pivots from the previous line of discussion. I wanted to know how, as a community, we can encourage publishing negative results,
01:14:42
specifically in my area of research, machine learning and artificial intelligence. We mostly see how models, deep neural networks and other models, have exceeded all baseline models, but I think as a researcher it is super helpful to see what did not work in a certain scenario, and which objective function doesn't work
01:15:01
for sequence data, for example. And I know from my experience that a three-layer neural network doesn't always work; maybe a traditional Markov chain works better. How, as a community, can we do this, when papers don't get accepted unless we show something that magically beats all other baselines?
01:15:20
So, I would like to, yeah? Well, as a mathematician, it's a little hard to address this question. If I prove a negative, it can be a theorem: the circle cannot be squared. And then, of course, it's a big result. So in mathematics I find that hard to answer,
01:15:44
but in the experimental sciences I can see the point, because you can then save other people from doing the same work. On the other hand, sometimes repeating something and varying it slightly can also be an advantage. But maybe somebody could answer that who is closer to the experimental sciences.
01:16:01
Yes, so, what I think from my field, experimental physics: I mean, there's not a real journal where you would go publish negative results, but a PhD thesis very often serves as that. In the sense that a PhD thesis not only presents the positive results and the articles that came out, but has a much broader scope
01:16:22
and a much wider description of all these other things that have been tried. It used to be that these PhD theses were difficult to get to, difficult to access. Of course, that is now completely different. Many people argue that there should be journals specifically for these topics,
01:16:41
but I've never seen them flourish. I've never seen them come up. So there seems to be some intrinsic limitation in this. So I don't know. I think we should learn, though, from the health sciences; it's the one field that does this well, and they recognize, as we need to, that not every negative result is interesting.
01:17:02
If you really want to be able to publish a negative result, you probably need pre-review, before you do the experiment, that has somebody come back and say: the theory you're trying to test is sound; your methods are sound; it will now be interesting if it turns out that you didn't find the result you expected.
01:17:23
In clinical studies you see this. Somebody says we have every reason to believe that aspirin's going to reduce your blood pressure. There's literature about this. We have the right power in our study. We have a reasonable sample. If we don't find it you get that published. If we do find it you get that published.
01:17:41
It's a nice system but I think we need to think about evolving to have for that kind of work a two-stage process where your design has been reviewed by peers who can acknowledge that yes, if you carry this out we believe a negative result would actually advance our knowledge in the field
01:18:02
and not just simply be evidence that you failed. If I fail to prove a theorem, is it because the theorem's not true, or because I'm really bad at proofs? That's not a good negative result. You have to have some theory and a comfortable design behind it. The reverse actually exists in some fields. There's a very famous journal in organic chemistry
01:18:22
where people report the production, the synthesis, of a certain compound. A paper is only accepted if another group independently reproduces the production of that same chemical compound. It takes a long time before a paper is accepted, but it's the most prestigious and highly cited journal in the field.
01:18:41
Okay, I'm trying to go on with it. I see three more questions at least, and so I'll give you all the chance to, or two more. There is another question from the audience. Well, some people. Okay. Okay, I'm Raimund Seidel. I'm the scientific director of Schloss Dagstuhl, which, for the mathematicians among you,
01:19:01
is kind of the computer science version of Oberwolfach. So like Oberwolfach, we have a seminar program which is about the same size as Oberwolfach's. In addition, we started running DBLP, which is a bibliography database which many of you use, and we also have a small open access publication program
01:19:25
which publishes something like 1,800 to 2,000 papers per year. Now, we're mathematicians, computer scientists. We know how to deal with numbers, but they're still abstract. Let me make things a little concrete.
01:19:41
If we were to receive 2,000 euros for every paper we publish, we would not just run our publication efforts. We would run the seminar program, we would run DBLP and we would have money left over. So this gives me the impression that the current system
01:20:03
where things are so expensive is actually a huge public subsidy program for investors in various publication companies and I think our natural reaction should be to get away from those companies altogether and find new ways of dealing with this.
01:20:22
When I say dealing with this, as one member of the panel said, maybe publication is not so much about dissemination of knowledge anymore, but about getting brownie points for promotion or whatever.
01:20:40
So maybe we should look at the system in a completely different way. Knowledge has to be disseminated but this is relatively easy. It's more a question of how do you get the stamp of approval, what mechanisms do you have for that? Yeah, so you're absolutely right
01:21:00
and the 3,800 euros per article that I mentioned is a scandal, you could say. And why is that? Well, for instance, I mean, the CEO of RELX, which is the company that owns Elsevier, he's paid per year, in pounds. The pound is at its lowest value now, but it's still more than 10 million euros per year, right?
01:21:22
That he gets as a salary. This is research money. You probably don't get that running the business you're running, right? And so the system needs to be changed. We need to come up with all kinds of alternative and new models. Why am I personally still spending quite a lot of time
01:21:41
to negotiate with the major publishers? Because if we really want to change the field, they just have right now 50% of the market. And if we are realistic, we're only gonna change things if we find some compromise solution with them. This is not the ideal solution. It's not the final one. But at least it is better than what we had in the past.
01:22:00
That's how I see it. But I support all kinds of initiatives to do it differently, to do it in a more modern way, to do it more cheaply, to do it more transparently, and so on. That all needs to change. But we also have to be realistic that we are in the situation that we're now in. Maybe somebody else also wants to react; otherwise I'll go on with the next question. I also see there's a question from the audience,
01:22:22
but one in line first. You'll also get a question. Hello. Thanks for all the answers. I have a simple question. There's the SCI, the Science Citation Index.
01:22:41
It indexes both high-quality journals, like the IEEE Transactions and ACM Transactions, and very low-quality journals. So in many places, any bad-quality journal and a good IEEE Transactions or ACM Transactions
01:23:02
are both counted as one; both have the same value. That means a good researcher and a bad researcher both get the same advantage. So why are IEEE- and ACM-like organizations not working in the direction
01:23:20
of creating a standard list that sets standards: A-quality journals or conferences, B-quality journals or conferences, C-quality journals or conferences, and after that, there is nothing which is acceptable as research; the other things are just online documentation.
01:23:42
That's it. Like a blog. So why are IEEE and ACM not initiating something like the SCI index? So, I think there's a very short answer to that. Frankly, if IEEE and ACM got together to do that, it would almost certainly violate US antitrust law and laws in other places,
01:24:01
and an attempt to monopolize, even if we included all of what we think are good venues from other publishers, an attempt to basically block people from getting credit for certain things, would be viewed as anti-competitive. I can't speak for what IEEE does.
01:24:20
I know that ACM works with people who put together their own lists. For instance, the China Computer Federation. Its communities put together lists of what they think are good venues, and we try to work with them to help them identify from ours which ones we think are the ones that are the top venues and the second tier venues.
01:24:43
So that researchers will recognize these are places they want to publish. Everybody who makes a list has different criteria. Every country that makes a list has different criteria. Some want things that have high impact. Some want things that are highly selective and reject a lot.
01:25:00
And I think we mostly hate these scores and lists, but we've had a long discussion about metrics that I won't try to repeat. So why don't we give funds to a CSRankings.org-like organization? They would add more lists, have a big server working behind that, and have lots of authors listed in it
01:25:21
who have relatively lower-quality publications. Right now, CSRankings.org only lists the very topmost conferences, hardly 40 or 50 conferences. So only authors from those 40 or 50 conferences are being listed, and beyond that,
01:25:41
it doesn't even list the IEEE or ACM Transactions. I mean, I wonder whether I understand it. Is your suggestion to make a list, some kind of ranking, of conference proceedings or journals? Okay, so, in mathematics, some examples of that exist.
01:26:01
For example, there's a Norwegian list. The Australians tried it once. The Norwegian list is fairly harmless, I believe. And typically it's a distinction between A, B and C journals. We discussed it at the German Mathematical Society and we very soon found that we can't really agree
01:26:22
because it depends very much on which part of mathematics we are talking about, and many other aspects. So we just gave it up. I know that the economists, at least that's what I've often been told, do have an ABC list, and they actually seem to believe in it and work with it.
01:26:42
But in mathematics, the discussions I've been involved in were not fruitful, and in the end, we gave up on that. Okay, with that, if we have the time, there are still two questions. Quick questions, quick answers. Or one question, okay. I thought you were also gonna ask a question. Thank you.
01:27:03
I've been aware of what Gauss said when he was criticized for publishing very few papers: pauca sed matura, few but mature. And that's been my policy
01:27:20
since I became an assistant professor. And then when I moved to California and I was being paid by the taxes of the citizens of California, I found that the reviewing process and the publication process was extremely onerous.
01:27:44
And so I post things on my webpage. Now there is a service which tells you how often various things have been cited. I can't remember the name of it because I don't pay a great deal of attention. But every now and then,
01:28:01
something bursts into my email that says, such-and-such an article achieved a new record. Whatever that was; I don't follow it up. I'm not interested. Now, I could count the citations, I suppose, and say that gives us an idea of the quality.
01:28:22
But in fact, there are a lot of strange things on my webpage. For example, there's an article titled How Blabbermouth German U-Boats Got Themselves Sunk in World War II. Of course, I'm a mathematician and computer scientist.
01:28:40
I suppose it would be hard to explain why that's on my webpage. But I get all sorts of email from people who say, oh yeah, I was in on that, or yes, that happened to me, and so on. So I guess there are people who read that too. And then there is the paper
01:29:00
that I collaborated on, except I finally came to the conclusion that it was a bad idea, and I asked to have my name removed, but it wasn't. And somehow, that's got lots of citations too. Goodness, I guess people are indiscriminate,
01:29:23
and it may be that when we ask to have papers reviewed, an enormous burden falls upon the reviewer. And it falls upon the editor too to find a competent reviewer. And quite often, they both fail.
01:29:44
So reviewing is not the cure. I think that is a very wise and interesting comment at the end of this panel discussion. I see you're standing up. I think we have the task of staying on time. I'm sorry to have gone one minute over time.
01:30:03
I'm very thankful for the active participation of the audience in the discussion and the questions that came up. You have to believe me. It's always more enlightening for the people sitting in the panel than for the audience. I learned a lot today. I found it very interesting. I want to thank all the panel members
01:30:22
for their participation in that. I thank you. And with that, I close and give the word back to you. Thank you.