
Poster presentation: How to support early career researchers with identifying trustworthy academic events?


Formal Metadata

Title
Poster presentation: How to support early career researchers with identifying trustworthy academic events?
Number of Parts
20
License
CC Attribution 3.0 Germany:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
Early career researchers, especially when they lack good support structures, can have difficulty identifying academic events, such as conferences, that are of questionable integrity ("predatory conferences"). In the ConfIDent project we aim to build a digital platform where researchers can inform themselves about academic events and get support with assessing an event's trustworthiness. During the project we explored different strategies for evaluating an event's trustworthiness, means of conveying this evaluation to the users of the platform, and ways of helping users make their own judgements. This poster presentation will expand on those different strategies, discuss the currently preferred solution, and highlight challenges.
Transcript: English (auto-generated)
And with that, let's just see if we're ready for Julian Franken. Julian, hello, how are you? Hi, I'm fine, thank you. Excellent, yes, please remember to share your screen. And once you're ready with that, we can begin with your presentation. Yes, okay, are you seeing my screen now?
Yes, yes, we do, yes. All right, hello everybody, my name is Julian. I'm from TIB, the Leibniz Information Centre for Science and Technology in Hanover. My poster is titled, How to Support Early Career Researchers with Identifying Trustworthy Academic Events. I'm working in a DFG-funded project called ConfIDent,
and we are building a platform to support researchers with finding the right academic events, like conferences, like this one actually. And we are supposed to support them in avoiding the wrong ones, so predatory or fraudulent ones. Recently, there was a report published
about predatory practices, so about predatory journals and predatory conferences in particular, by the InterAcademy Partnership. One of the main insights, core insights, I would say, is that predatory conferences, or conference quality, is best seen as a spectrum,
as you can see here on the top; I copied this infographic from the report. And typical markers are also shown here to illustrate what a predatory conference, for example, looks like. And some of those typical markers for predatory ones
are, for example, that in extreme cases they don't take place at all, they can be a complete scam, or they only pretend to have a peer review process but actually don't. So ConfIDent wants to keep those conferences and events out of its database. As you can see here on the top, I marked those types.
So in the infographic below here, I tried to illustrate how the rough process looks, how we intend to keep those out. So as you can see here first, a new event is entered,
and then we want to conduct some automated checks. For example, if an organizer is on the blacklist, we want to keep this event out. Then, if it is not on the blacklist, markers that can already be identified by a machine are highlighted and sent into the manual check,
so that a professional can look into those. Again, if those checks are passed, then the event enters the ConfIDent database. And after that, we hope to involve the community as best as we can and give them the opportunity to flag events that they deem predatory
or that they deem not trustworthy, and those will be reviewed again. But in the end, the most important part of this process is still the researchers' own scrutiny. So we can only be as good as our data is, and the researchers themselves still have to rely
on their own scrutiny and do their own investigation and research. We hope to inform them as best as we can. And during this process, as you can see on the bottom, I tried to mark that the certainty of the quality increases during this process, but there are always some challenges with this. For example, the first is: what if the entered data
is false in the beginning? So if somebody simply lied about anything about the event. One example of that is if an organizer is mentioned who is actually not really involved in the organization of the event; that would be a problem too.
This is one of the major challenges. In general, this addresses the issue of how to codify these markers at all. So most of those markers that are mentioned in the report actually can only be checked by a thorough investigation of the researchers themselves by looking at the website, for example,
and really digging deep into the conference. And getting this into a database is a challenge for us. Could you please wrap up your thought, Julian? Last sentence, please. Okay, and that's, yeah, basically all of the challenges we will probably encounter, and thanks for your attention.
Super, thank you very much, Julian. And of course, just a reminder, when we're done here with our short presentations, you'll have a chance to go and actually go into the virtual rooms, meet the speakers, and continue to talk with them one-on-one. So we're looking forward to that as well. Good, thank you very much, Julian. Appreciate it.
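The screening pipeline described in the talk (a new event is entered, an automated blacklist check runs, machine-detectable markers are highlighted for a manual reviewer, accepted events enter the database, and the community can flag them afterwards) can be sketched roughly as follows. This is an illustrative sketch only: all names, the example blacklist, and the single marker rule are assumptions, not the ConfIDent project's actual implementation.

```python
from dataclasses import dataclass, field

# Assumed example entry; a real blacklist would come from curated data.
BLACKLISTED_ORGANIZERS = {"Fraudulent Conferences Ltd."}

@dataclass
class Event:
    title: str
    organizer: str
    claims_peer_review: bool = True
    community_flags: list = field(default_factory=list)  # flags raised after acceptance

def automated_check(event):
    """Step 1: reject events whose organizer is blacklisted, and collect
    machine-detectable markers to highlight for the manual reviewer."""
    if event.organizer in BLACKLISTED_ORGANIZERS:
        return "rejected", []
    markers = []
    if not event.claims_peer_review:  # one illustrative marker
        markers.append("no peer review process stated")
    return "manual_check", markers

def screen(event, manual_reviewer):
    """Run the full pipeline: automated check -> manual check -> database.
    `manual_reviewer` stands in for the professional who inspects the
    highlighted markers and returns True to accept the event."""
    status, markers = automated_check(event)
    if status == "rejected":
        return "rejected"
    if not manual_reviewer(event, markers):  # Step 2: manual review
        return "rejected"
    return "accepted"  # Step 3: the event enters the database

# Usage: a reviewer who accepts only events with no highlighted markers.
event = Event(title="Example Open Science Conference", organizer="Some Society")
result = screen(event, manual_reviewer=lambda e, m: len(m) == 0)
```

Community flagging would then append to `community_flags` on accepted events and send them back through `screen`, matching the "flag and review again" loop described above.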