A community-driven approach towards open innovation for research comms
Formal Metadata
Title: A community-driven approach towards open innovation for research comms
Series title: FOSS Backstage 2020
Number of parts: 9
License: CC Attribution 3.0 Unported: You may use, adapt, and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose, provided the author/rights holder is credited in the manner specified by them.
Identifiers: 10.5446/65910 (DOI)
Language: English
Transcript: English (automatically generated)
00:03
Yeah, starting with this quote: "We have failed the internet." The internet was built by scientists, for scientists, so that they could share their results with more people, faster and more easily. The internet, I think all of you would agree, has changed the way that we communicate
00:21
a lot of information. So I'll give you some examples, and I'm going to spend some time elaborating on them, so that later you'll see sort of the contrast that exists between other types of information and research. So for example, looking at how you can find out the score of a football match.
00:43
100 years ago, you'd probably have to wait until the match was over. You'd read the newspaper the next day. There'd be a static photo, some text, and you'd get the score. Nowadays, I just found out when I was preparing for this, you actually get these sort of live streams on YouTube,
01:00
which is called 360-degree video. So while the match is playing live, you can pan around the stadium and see whatever bit you want to see from whatever angle you want. So I think we can all agree that that's a huge improvement over that static image. Another example.
01:22
So say I've just arrived in a new city, and I want to find good restaurants to eat at. 100 years ago, I'd be looking at a guidebook. There'd be a limited list of restaurants, I'd be relying on the comments from the author of the book, and then there'd be an address, and I'd have to check a map and go there.
01:41
Nowadays, within a couple of seconds, you can use your GPS, and with the maps, you can get all the restaurants around you. You can get crowdsourced reviews and ratings in real time. You can filter the results by your dietary requirements or preferences, and with another click,
02:03
you get instantaneous directions for the fastest way to get to that restaurant. And that's really, again, a big change. And what I'm about to tell you is the situation with how we communicate research. So first, to sort of step back,
02:23
you have to first understand how research is communicated nowadays. So let's say I'm a PhD student working in a lab. I do my research, I spend hours and hours slaving over my experiments, and then finally, after three or four years, I have some results. I write them up in the form of a manuscript.
02:42
I submit the manuscript to a journal. The journal then has these editors who read the manuscript and send it to what we call peer reviewers. These are sort of experts in my research field who will be able to read my work critically and give me comments, so they will come back to me with revisions.
03:00
So these are comments on how I can improve the work that I've done. Either I have to rewrite something, do some analysis again, or run some new experiments, and then send it back to the reviewers. And this happens a couple of times before we're happy that the work is worth publishing. So then it appears in sort of this journal,
03:21
which is now a website, used to be in print like a physical book. I'm gonna point out some problems with this process in a moment. So there are four main problems. The first problem is that it's extremely long. So the average time from research submission, so the writing and submitting,
03:42
not counting the time for the research, to actual publication is, on average, nine months at the moment. It doesn't sound long, but think about a very relevant situation at the moment: the coronavirus that we've all been talking about. The outbreak was discovered, or at least publicized, in January.
04:02
So let's say for some miraculous reason, the research was done in one day, and the researchers wrote everything up in one day as well. And then they submitted to the journal. Given that average time, we would not have seen anything coming out
04:20
from the research at this point, and we would not see it for another four or five months. And that's a big problem, because it means that this process is stopping the communication of research in a timely manner. And it's also a very, very stressful process for the people who are involved. So an average PhD student has about four years
04:41
to finish their research. Otherwise the grant money runs out, or you cannot graduate, et cetera, et cetera. And while on average it takes nine months for this submission process to go through, it can take anywhere from nine months to two years, even three years I've heard.
05:01
And in the current sort of research environment, a publication is crucial for their career progression. Without a publication, they cannot get to the next step, to the next job. And so this creates a lot of stress. It's called the publish or perish mentality,
05:21
which is very unhealthy. I used to be a researcher. That was the primary reason why I left. The third problem is that it's closed. What I mean by that is there's a lot of discourse during that peer review and revision process,
05:41
pretty much like if you have that discussion over a pull request on GitHub. That happens during that process, but in the end, the only thing the scientific community sees most of the time is the published paper. You never see that discussion that happened. You never understand why a paper was rejected or revised at any point.
06:01
You just see the results. So everything looks perfect on the paper because you don't see the mistakes or anything. And the worst thing is, sometimes you don't even see the publication because journals hold the copyright over the publication
06:24
and they impose these things called paywalls. So institutions and libraries have to pay massive sums in subscription fees to journals for their researchers to be able to access those publications, even if it's their own work.
06:41
And imagine that if you are from an institute without much money, you're basically barred from that knowledge. And that leads me to my final point. This process is extremely expensive and it's extremely profitable. So one of the biggest publishing companies
07:02
is called Elsevier. Their operating margin last year was 37%. To put that in perspective, the operating profit margin for Google last year was 24%. And scholarly publishing is so profitable because
07:22
the journal doesn't pay the scientists to do the research. They don't pay the peer reviewers to do the reviewing. And they're getting money from scientists and institutions for reading the research and for publishing. And it's ridiculous. And just to sum it all up, I love this analogy. So it takes about nine months
07:42
to send a rover from Earth to Mars. And so what you can imagine if I'm a PhD student, I would print out my manuscript, attach it to this rover, get it sent to Mars, have it put the physical sheets of paper on the surface of Mars, use those camera thingies on the top of the rover
08:02
to take images of the paper, beam it back to Earth, and that would still be faster than publishing in a journal. It would be open because this is supposedly government funded and all the data that comes back from those cameras can be accessed on the internet. And it would be cheaper.
08:23
And so you would have thought that after all that money and time and effort, the output of this must be stellar, must be something amazing. And the thing is that it's not. So the research itself is good, but the way that we communicate research
08:43
hasn't changed for like 100 years. Okay, yeah, I have to admit that there are links that you can click now and some resources, data sets that you can download, for example. But it's not like we now have amazing interactive figures that you can spin around in 3D, or open peer reviews
09:02
where you can look at the ratings of the reviewers. So there's a lot of room for research communication to be faster, to be more cost effective, to be more open, to be more user friendly. And that is sort of the reason why my organization exists.
09:21
So I'm the innovation community manager at eLife. This is our mission, it's on the wall of our office: we help scientists accelerate discovery by operating a platform for research communication that encourages and recognizes responsible behaviors in science.
09:42
We're funded by some of the biggest research funders in the world: the Max Planck Society, the Wellcome Trust, the Howard Hughes Medical Institute, and the Wallenberg Foundation. We do this by publishing some of the best work in the life sciences,
10:01
working with an editorial community that allows us to do that, fully open access and online. We do that by working very closely with early career researchers to involve them in our governance, to involve them in the peer review process and to make sure that they can build
10:21
a research ecosystem that works for them. And we also do that through what I'm going to introduce next, which is called the eLife Innovation Initiative. So I lead this initiative. It's a separately funded effort, and our mission is to drive open innovation for open science. So the fundamental belief here is that
10:43
technology can change our behavior. And I don't think I need to convince a lot of people here that that's true. I mean, look at the way that our phones and Google Maps and other types of innovation have changed the ways that we go about day-to-day life. So our sort of fundamental assumption
11:02
and what we're hoping is that by developing tools that make it easy to do the right things when you communicate research, it will help drive more people towards that route. So I've explained why innovation is the approach that we're taking, but why open innovation?
11:22
What is open innovation? So I sort of want to contrast two slightly different concepts here, which I think previous speakers have also touched on: something called open by default, and open by design. So I learned a lot of these concepts from Mozilla Open Leaders.
11:41
They're great and I think a lot of this talk is borrowed from various different places, so just bear with me. So open by default is, for example, if you sort of just put all your code online without any explanation, without any descriptions or ways to facilitate people's contributions.
12:01
It's chaotic, it's not easy to follow. Open by design means that it's very intentional, it's very strategic, and the idea is that it should be inclusive and open for revision. So if I'm a newcomer to the field, I should be able to find very quickly
12:20
and effectively how to contribute, how to get help, and how other people and their contributions are recognized. So what we're striving for at eLife is open-by-design open innovation. Why do we think this is important? So if we forget about the openness part
12:41
and we just say we want to make good tech that will change behavior, we can just ask the publishers who already know research communication super well and can easily build tools that will change behaviors and make people's lives easier. And in fact, they're already doing this. So this is Elsevier,
13:01
the 37% operating margin company that I mentioned earlier. This is their map of how their tools and APIs are linking together some of the information systems that support research. The problem with this map is, well, other than everything: other than some of the bubbles on the outside,
13:22
everything in the middle is owned by Elsevier. And so basically, by looping researchers in and trying to get them to use this, we're handing control, and our user data, to this company.
13:41
And are we sure that we want to hand over the chance to change research to a company that is making that much profit? I'm not entirely sure. And the other thing is that the reason research works this way at the moment is that
14:05
there's a big underlying cultural problem. And the current system is facilitating it. And so by using the same actors to recreate the system, even with better technology,
14:21
those biases are gonna persist. And so, I don't know how many of you have seen this soap dispenser video. Basically, the point I'm trying to make here is that we need to think about who owns our knowledge structures and who should own them.
14:42
We need to put users and everyone, a diverse community, behind the design of these tools so that we are not propagating biases that we don't want. And there's another point,
15:01
which I think generally applies to a lot of open tools but is probably more specific to research tools: if a research tool is not open, it leads to people not being able to reproduce that research. So for example, let's say I take a certain image
15:21
of a tissue with a microscope. If I don't own that microscope, I don't have money to buy it, or I don't have access to it, then I can't take that image again. So I can't really reproduce or trust that result, because I can't do it myself. And this applies to software as well: if it's proprietary,
15:43
a lot of the processes, and maybe the data analysis or data collection, cannot be reproduced. And that is an issue that's specific to research itself. So, to recap: our vision is to create open, inclusive,
16:02
user-centric research communication tools together with the community. So how do we do that? I'm gonna sort of talk about two things that we're doing at the moment and constantly revising as we go through and learn from the community.
16:20
So one idea that we had at the very beginning in 2018 was, okay, we have the researchers who know this problem really well and are very passionate about it, but at the same time, they may not necessarily have those tool development or software development skills. We need to put them together in a room and give them time with people
16:43
who know how to develop products, how to code, how to design, et cetera. And this gave birth to the eLife Innovation Sprint, which took place in Cambridge, UK. We brought 60 people together for two days
17:05
and let them work on prototypes. So going back to my point about open by design, the goal of this is specifically to create prototypes that are open, inclusive, and user-centric. We needed to build processes that would facilitate that.
17:22
And so the whole promotion of this event and the application process were very deliberately designed so that we could balance out who got to come. This was last year, 2019. We had people coming from 16 countries.
17:41
We spent a lot of time getting sponsors to try and get them to pay for people to come from all over the world. We had a gender quota to make sure that we achieved our gender goals. And yeah, we had a good mix of researchers and everyone.
18:03
We only had two days, so we needed to get them to know each other super fast. So we had lightning talks at the beginning. We had stickers to help them identify each other. We had a document before the actual sprint where people pitched their ideas
18:21
and got a discourse going around them. We also used the document to ask people if they needed data sets to create those prototypes, and we loaded those data sets into our AWS cloud beforehand. We had a Slack channel during the event which we were projecting on the screen
18:40
so that people could see which projects needed help and where projects were. We provided everyone with sort of a lean canvas to give them an idea of how they could potentially structure their thinking about problem-solution-user fit for their specific projects. And then, as with any design sprint, there was
19:05
no shortage of Post-its. So yeah, I'm sharing these thoughts because we learned a lot of them from other community events, and I think we're not unique, but I hope that if you happen to be organizing
19:21
a hackathon or a sprint, you will be able to take some of our ideas too. So what did we create? We really realized that the best ideas come from the community. I am not gonna go through every one of them. I'm just gonna highlight one particular one that is very relevant at the moment.
19:42
But yeah, do check them out. They are prototypes. A lot of them got further funding because of the prototypes that they created at this hackathon. They were able to demonstrate to potential funders what their visions are, and that they have put in sufficient work
20:01
to develop a roadmap and a wireframe. And yeah, a lot of them are still in development. So the one that I want to highlight is PREreview. It's a crowdsourced reviewing platform for preprints, which are basically manuscripts that have not been peer reviewed yet. So this was the coronavirus problem that I was talking about.
20:21
Preprints allow these papers to get out before they are peer reviewed. So when I'm ready to submit, I can preprint my paper and that instantaneously gets shared with the world. PREreview recently partnered with another open source project called Outbreak Science.
20:42
And this particular project was, I mean, literally so timely. They partnered, and then one month later there was the coronavirus outbreak. And so they really need help now. So this is a platform that they have developed to get people to try and review some of the preprints
21:00
coming out of coronavirus research. And so if you do know any scientists who can help fill out 10 easy questions on how valid the research is and how good it is, please direct them there. They are also hiring a software development team to help them out with platform development. So if you're interested, just let me know, I can put you in touch.
21:25
Okay, that's all great. We developed a lot of prototypes, but everyone knows the problem with hackathons: most of the prototypes sort of just end there, because people don't have time, because, even if one person is super passionate,
21:43
they don't have the skills maybe, they don't have time. And then it just sort of never gets followed up on. So I sort of spotted this problem from the sprint last year and I was like, is there anything I can do about that? How can we really empower people to develop sustainable prototypes? And this led to the birth of another program
22:04
called eLife Innovation Leaders. It's a 14-week open leadership and mentorship training program for developers developing open prototypes for research. It's ongoing. It started in February, so it will end in May.
22:22
We have 27 participants coming all the way from, I think, Melbourne, Australia, to the west coast of the US. It's all online. What we do is we pair them one-to-one with a mentor
22:40
and we also built a curriculum that covers the wide variety of topics that are needed to empower people to create those sustainable prototypes. So I'm just gonna go into that a bit more. So you start with sort of talking about
23:03
some of the things that we talked about today: value exchanges within the community, why you are equipped as an open leader. You move on to roadmapping, understanding users, understanding problems, and prototyping processes. And finally, towards the end, we'll probably talk about sustaining the project
23:22
beyond the initial prototype, and getting that funding and marketing effort together. We do cohort calls every single week; because of the time zones we have to do two calls every single week, and it's a lot of fun.
23:42
But we really learn a lot. So the whole pedagogy, or the whole idea of learning behind this program, is that there's a lot of discussion. So everyone sort of just goes into breakout rooms. This is all done over Zoom, so you can sort people into rooms, and then they can have discussions, and then they report back, and you write in the Google Doc
24:01
what you've talked about and what you find interesting. The amazing thing I find is that after a couple of weeks, you'd think, okay, it's just discussion and nothing concrete is coming out. But the sharing of experience, and the fact that you're putting words down on paper, essentially means you're creating knowledge on the go.
24:22
So we find that people are going back to these documents when they're actually working on maybe building user personas or roadmaps and looking at what other people are saying and then using that as a reference. And so it just works. It's pretty empowering because once you realize
24:41
that you're actually in this part of the process, you realize that you may have the capability to create something. Yeah, it's just a longer explanation of how we make sure that everyone can learn from each other and apply their knowledge by completing some assignments.
25:00
Yeah, so with all of this, we're learning ourselves as well, as we're building this program, from our participants and what they want. And I think the flexibility that we've designed into the program is key for its growth and for the participants' growth. So those are two ongoing things.
25:21
What's next? Here I'm sharing sort of some thoughts that may be interesting to folks who are thinking about developing communities and, yeah, how to do it. What are we trying to achieve, basically? Because this is not your normal community. You realize that it's a bit different. We're not trying to get everyone to use a certain product.
25:41
We're building what we're calling a community of practice, which essentially means we're united by a shared goal of making research open and inclusive. This was when we asked people in the survey from the sprint what the most valuable outcome was
26:02
when they came to the sprint. And unsurprisingly, or perhaps surprisingly for us because we were so focused on wanting those products, what people actually valued were other people. So the connections were way more important than what they actually created. Again, borrowing this from Mozilla,
26:20
I don't know how many of you have seen this. The mountain of engagement has really shaped the way that I thought about things. There's also sort of a buzzword at the moment on Twitter, which is called the architecture of participation. So how do you build an architecture which allows different people
26:40
coming in with different expertise and skills to be able to participate in the project? So what you want to think about is how they would first discover your project, and then how they would build up their initial interest to step over that barrier of the first comment or the first issue or the first commit,
27:03
through to sustaining that participation and energy, and then finally having them be able to take up a leadership role. So, at least in my vision for eLife Innovation, we support them through the different stages of their project building.
27:22
So they will maybe join at the sprint or when they follow us on Twitter, they will see the ideas of other people, they get interested, they will discover that there's an ecosystem out there trying to change things. They come to the sprint, they will contribute their little ideas and realize that it actually can become a prototype.
27:41
Then they may say, okay, maybe I can actually lead this project now, but what do I do? Then they will join us at Innovation Leaders, and we will give them the skills and connections that they may need to be able to launch the V1 or the MVP of that particular product.
28:00
So we're sort of there at the moment and haven't gone through the rest of it yet. So this is the next challenge: to think about how we reward people beyond just them learning from each other and the networks that they build. Is there a way to tie what they are doing back to their careers?
28:22
So if researchers are contributing to open research software development, is there a way to give them credit in terms of their career? Can we capture how much that software is being used, so that it could somehow be reflected in whatever grant application they're making next?
28:42
I know this all sounds like something I'd really love to happen, and I'm very, very glad that there are communities of people who are supporting us, and we're learning from each other in that way. So if you don't know them already, check them out: Sustain OSS, working on the sustainability
29:01
of open source software, and CHAOSS, working on measuring the health of open source software communities. So yeah, we're just learning from and interacting with that community, and hoping to learn from the bigger open source community, because you've existed for a longer time and I'm sure that there are lessons that we can take from there as well.
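As a hypothetical illustration of the kind of usage signal mentioned here, and not a tool eLife actually uses, a minimal sketch could pull a few public adoption metrics for a research software repository from the GitHub REST API; the repository name below is a placeholder.

```python
# Hypothetical sketch, not an eLife tool: pull a few public usage signals
# for a research software repository from the GitHub REST API, as a crude
# proxy for adoption that could accompany a grant application.
import json
import urllib.request

REPO = "example-org/example-research-tool"  # placeholder repository name

url = f"https://api.github.com/repos/{REPO}"
with urllib.request.urlopen(url) as response:
    repo = json.load(response)

# A few signals the GitHub API exposes for any public repository.
print("stars:      ", repo["stargazers_count"])
print("forks:      ", repo["forks_count"])
print("open issues:", repo["open_issues_count"])
print("last push:  ", repo["pushed_at"])
```

Projects like CHAOSS define much richer community health metrics, but even simple signals like these hint at the kind of evidence of reuse being described.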
29:24
So thank you for listening to this very long whine about research communication, but I hope you got a little overview of what the problem is and what a potential solution may be. If you'd like to join us at the next sprint, we are about to go out with an announcement,
29:41
not now, but in about two weeks' time. So that's the link to stay updated. Same for Innovation Leaders, but probably later in the year. And I'm on Twitter and email if you have any questions beyond now, but I'm happy to take any questions now. Thank you.
30:13
Hi, thanks for your talk. You talked about research software, and often in your examples,
30:23
it was software which you need for, yeah, like peer review or something. So for me that's software you need to run the whole process of research in general, but often research software is also simulation tools and stuff like that.
30:41
My question now is: are you interested more in software which supports the research process, or also in research software which supports the research itself, like simulations and whatever? Yeah, that's a really good question, thanks. Our primary interest was in research communication software.
31:02
So tools that would change the way that we share, discover, evaluate and consume research. That being said, when we opened up, for example, for the sprint and also for Innovation Leaders, we realized that a lot of people actually wanted to use those occasions to develop open research tools. So very tied to data processing, for example,
31:22
very specific to a particular part of the research pipeline. And we said, why not? Because they benefit from the same types of interactions and architectures of participation and expertise. And so we've not said no, and I'd personally love to support that community of research software developers,
31:42
which is really growing. And I think it's really important that they get recognized for the work that they do, which isn't happening now. Yeah, a short comment on that. Yeah, exactly, there's a whole community, RSE it's called, and we also have one here in Germany. I'm hoping to get that running here in Berlin as well.
32:02
So yeah, if somebody's interested... sorry. Yeah, no, I completely endorse that. So if someone is interested, RSE communities are growing, they're also in the UK. I think the German conference is August 25th, if I'm not wrong. Yeah, something like that. And then there's also the UK one.
32:22
So this growing community is recognizing that we need software for research, and hence we need people to develop that software, and it's better if it's developed by people who know how to develop software. Yeah, and they have overlapping goals, like open sourcing stuff, but also making the whole process faster and more transparent.
32:40
So I think it's really a good fit. Yeah, exactly, thank you. Yeah, great talk. Kind of a slightly sarcastic comment, but I'm guessing Elsevier must not be a big fan of you. Nope. We're friendly, well, ish.
33:01
I mean they, like, yeah. This threatens their whole business model. How are they dealing with that? Are they, I don't know, what are they doing? What do you mean, how they're dealing with it? Well, I don't know, if I were a big company making lots of money, and a group of people started to do something that undermines pretty much my whole business and certainly my 40%, you know, insane
33:23
monopoly profits, I'd probably hire some lawyers and start looking at how to, I don't know, get rid of those people somehow. Yeah, are you saying that I'm gonna get sued after this? I mean, I'd love to think that they are scared
33:41
of the things that are coming. Unfortunately, that's not really the case. They still have huge buy-in from the research community. You'd think that after all the stress and all the turmoil of, you know, researchers going through this publication process, nobody would want to publish with Elsevier ever again. That's completely not true.
34:00
Because academic prestige is everything, and they've done such a good job at establishing themselves at the top of the hierarchy, researchers would kill to get a paper in Elsevier journals. It's a sad reality, and that's why, you know, technology, as much as I love it
34:21
as a way to change all the behaviors, is sort of just the facilitation platform for all the cultural changes that can happen on top. And yeah, we are doing a lot of community work, trying to understand what can drive people to escape from that status quo
34:41
and do something different, despite the fact that, you know, you may not be able to see the return on that for a long time. And again, yeah, I'd really love for Elsevier to think that we're a threat, but in the end, we are really an egg hitting a wall. So we need more people in this, basically.
35:08
Actually, that was a very good way for me to bring up my question, which was basically, okay, so yes, I think most people would acknowledge what you say about Elsevier and, you know,
35:21
organizations like Elsevier, let's not make it specific but there is a certain, besides, you know, being the gatekeepers, they actually got to be the gatekeepers for a reason, which was, you know, there is a kind of value in peer review, which is basically, you know, you get qualified people to give you input and so on and so forth.
35:41
So my question is: how can you attract qualified people to contribute to your platform, and how can you vet them? That's a really good question. I agree with you 100% that peer review has value. My question is always, why does it have to be closed?
36:02
Why can't this be in the open, and why can't more people participate in it? So think about reviewing a restaurant on Google, right? You're not only asking foodies to do that, you're asking everyone to do that, and there are gonna be spammers, there are gonna be people who are paid to do the reviews, but in one way or another, we sort of manage,
36:23
and can the same be applied to research? Maybe, maybe not, it is an experiment. I think different communities are sort of coming to settle on different levels of vetting or accreditation for editorial communities
36:41
besides editors themselves. Ultimately, I can tell you, because research is so specific to a specific field, if I'm a PhD student looking at this particular part of the brain in this particular animal, I'm the expert in that, and not that many other people are,
37:00
and so even if I'm junior in my research career, I actually know quite a lot about what I know specifically, probably more than someone who is working on a different area of the brain in a different animal. So the question that we can never settle on is: what is a qualified reviewer? And if we cannot settle on that,
37:23
does it make sense to then put up a barrier to stop other people, or the wider research community, from participating in this peer review process? I personally don't see it. I don't think the barrier is fair.
37:44
In my PhD program, I need to publish in ISI- or Scopus-indexed journals. How do you relate to these? So can you explain a bit? Yeah, these are kind of lists of journals that have, what do you call it,
38:06
a high value, or are very top level. So yeah, you know these, so are you listed there? We have an impact factor. We don't like to talk about it,
38:20
but you can always Google it. In fact, if you type eLife into Google, it automatically gives you the impact factor, which again shows how tools can really encourage bad behaviors when we should really be moving away from impact factors. Yeah, so I think the problem is that you have to start somewhere where people trust you.
38:42
So people need to believe that we're serious, and that's helped by the fact that we're backed by serious people and we publish high quality research, which the community generally agrees is high quality. So this is all sort of vague, but it's true. And then we can start changing things from there.
39:00
The problem is that if you start from a grassroots project and you don't have that trust, nobody's gonna take you seriously. I mean it's just a sad fact in sort of the research community and beyond. Yeah, so it's still very important. We recognize the value of our editorial community.
39:21
The editors that we have are all scientists, and they're all very well regarded in their fields, so people believe that what they say are good papers. But again, going back to that question, I think this should change. If you know the peer review process, you will know that most of the time it's sort of just editors asking people they know
39:43
to review those papers. And it's all based on this nebulous cloud of knowledge in the editor's head, which makes it extremely subjective. And so there's definitely technology that we can use here, especially with the web and with most researchers' profiles being available online.
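As a hypothetical sketch of the kind of technology alluded to here, and not eLife's actual system, candidate reviewers could be ranked by the textual similarity between a manuscript abstract and each candidate's published abstracts, for example with TF-IDF vectors and cosine similarity. All names and texts below are placeholders.

```python
# Illustrative sketch only (not eLife's reviewer-matching system): score
# candidate peer reviewers by how similar their published abstracts are
# to the abstract of a newly submitted manuscript.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

manuscript_abstract = "Abstract of the submitted manuscript goes here."
candidate_profiles = {  # placeholder reviewer names and profile texts
    "reviewer_a": "Concatenated abstracts of reviewer A's publications.",
    "reviewer_b": "Concatenated abstracts of reviewer B's publications.",
}

names = list(candidate_profiles)
corpus = [manuscript_abstract] + [candidate_profiles[name] for name in names]

# Represent all texts as TF-IDF vectors, then rank candidates by cosine
# similarity to the manuscript abstract (index 0 in the corpus).
vectors = TfidfVectorizer(stop_words="english").fit_transform(corpus)
scores = cosine_similarity(vectors[0], vectors[1:]).ravel()

for name, score in sorted(zip(names, scores), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

In practice this would need a profile source, such as public researcher profiles or publication records, and any ranking would only ever be a suggestion to a human editor; but it shows how openly available profiles could make reviewer selection less dependent on one editor's personal network.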
40:02
That kind of technology could make things a bit fairer and more inclusive and more open, I guess. We're working on that as well. So yeah, it's a tricky question, but the short answer is we're starting with sort of
40:20
the old ways of establishing trust but we're hoping to change that using that trust. Any more questions? You're operating kind of at the intersection
40:41
of science and research, product development, but also just open source, open communities. And I'm just curious to hear: what are some maybe unexpected challenges you ran into in fusing these communities, or surprises, or maybe positive stories that you would like to share? Thanks. Oh, positive stories.
41:02
Yeah, we realized that people learn a lot when they work at intersections. So academics, or academic developers, don't necessarily have those design or product backgrounds. And so whenever we put them together... so I remember the first time
41:21
I heard about design thinking, and this echoes with a few other people that I've worked with as well: why didn't I approach problems like this before? It's just a new sort of realization. And then, yeah, moving from an academic background myself into a sort of more product- and software-oriented company,
41:41
there are things that you then immediately realize, that if only academia took on some of these learnings from other communities, they would have done so much better. Rather than using papers to measure their output, you would use other outcomes that you could define from the very beginning, and you would learn how to timebox your projects
42:02
and things like that. And so yeah, there was another thing that I wanted to say. Yeah, the other thing is that we also realized that researchers are so resistant to... like, we don't like to change.
42:21
So it's really about making it super easy. And sometimes the metrics that you measure your UX efforts against are not the immediate outputs, but rather the long-term outcomes of what happens at the end. So this is sort of an ongoing discussion that I was literally just having yesterday.
42:40
But yeah, it's sort of realizing what's special about research software, or open source research software, and being able to transfer some of that knowledge from the broader open source software development space, and making sure that we can be informed as we develop these products.
43:10
Thank you.