
On the use of social networks as a prognostic market tool for improving the practice of evaluating scientific research


Formal Metadata

Title: On the use of social networks as a prognostic market tool for improving the practice of evaluating scientific research
Number of Parts: 41
Author: Yuri Dubovenko
License: CC Attribution 3.0 Germany: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Production Year: 2022
Production Place: Kyiv, Ukraine

Content Metadata

Genre: Computer animation
Transcript: English (auto-generated)
Good afternoon, I am from Ukraine. My name is Yuri Dubovenko, and I would like to present to you my notes on the use of social networks as a prediction market tool for improving the practice of evaluating scientific research.
Scientometrics is the quantitative study of science, communication in science, and science policy, according to Hess. The field has evolved over time from studying indices for improving information retrieval from peer-reviewed scientific publications to covering further types of documents and information sources relating to science: datasets, web pages, and social media.
Scientometric indicators complement and contribute to efforts to standardize, collect, report, and analyze a wide range of science, technology, and innovation activities by providing evidence on science and technology outcomes. Citation metrics are divided into four main levels: article-level metrics, journal-level metrics, author-level metrics, and altmetrics.
It is convenient to apply automated statistical metrics to scientific outputs in order to evaluate the productivity of scientists and the ways of economic support of the scientific industry.
Scientometric indicators have evolved since their first introduction into scientific circulation. Among them are the h-index, g-index, hg-index, e-index, A-index, R-index, m-index, etc.
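As a minimal sketch (not part of the talk), the first two of these indicators can be computed from a list of per-paper citation counts as follows; the numbers are illustrative:

```python
def h_index(citations):
    """h-index: the largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
    return h

def g_index(citations):
    """g-index: the largest g such that the top g papers together have >= g^2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

papers = [10, 8, 5, 4, 3]  # citation counts per paper (illustrative)
print(h_index(papers))     # 4
print(g_index(papers))     # 5
```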
When comparing researchers in different scientific fields, one needs to correct for differences between fields in publication, collaboration, and citation practices.
Field-normalized indicators make such corrections, but the design of these indicators is a challenge. Ukrainian scientists also got involved in this process, developing the fractional Shtovba index, which allows ranking the contributions of authors with the same h-index.
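For illustration, here is a hedged sketch of fractional citation counting, one common way such a tie-breaking correction is implemented; whether it matches the Shtovba variant exactly is an assumption, and the numbers are made up:

```python
def fractional_h_index(papers):
    """h-style index on fractionalized citations: each paper's citation count
    is divided by its number of co-authors before applying the h threshold."""
    scores = sorted((cites / n_authors for cites, n_authors in papers),
                    reverse=True)
    h = 0
    for rank, s in enumerate(scores, start=1):
        if s >= rank:
            h = rank
    return h

# (citations, number_of_authors) per paper -- illustrative values
sole_author = [(30, 1), (5, 1)]
team_author = [(30, 5), (5, 5)]
print(fractional_h_index(sole_author))  # 2
print(fractional_h_index(team_author))  # 1
```

Both profiles have the same plain h-index of 2, but fractional counting separates the sole author from the large team, which is the kind of distinction the talk attributes to the index.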
Universal citation impact measures extend the h-index by rescaling the citation counts of researchers and publications, but each computation requires extensive citation impact data and statistics for every discipline and year. Social crowdsourcing via Scholarometer evaluates the universality of the citation impact metrics, i.e., the field-normalized h-index.
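One simple rescaling of this kind divides a scholar's h-index by the average h-index of their discipline, in the spirit of the field-normalized index just mentioned; the field averages below are invented for illustration:

```python
# Invented field averages for illustration only.
field_mean_h = {"mathematics": 12.0, "molecular biology": 35.0}

def universal_h(h, field):
    """Field-rescaled h-index: the same raw h means different things in
    fields with different citation cultures, so divide by the field mean."""
    return h / field_mean_h[field]

print(universal_h(15, "mathematics"))        # 1.25  -> above the field average
print(universal_h(15, "molecular biology"))  # ~0.43 -> below the field average
```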
Automated citation indexing allows millions of citations to be analyzed for large-scale patterns. But, first, scholars engage in ethically questionable behavior to inflate the number of article citations, and second, all scientific metrics are bound to be abused. When an economic feature becomes a target indicator, it ceases to function as that indicator, because people start to game it. Therefore the next slide is, so to speak, a guide to how you can inexpensively increase your h-index, for example.
When demand arises for quick metrics in the modern digital world, startups emerge quite quickly to automate the evaluation of academic achievement metrics; you can see one of them, called Shtovba, on the next slide. Another alternative proposition, from the market giants, is the tool for scientific metrics provided by the Organisation for Economic Co-operation and Development (OECD). As you can see on the next slide, it makes it possible to estimate the total number of scientific publications by country in a given branch, for example in the Earth sciences.
Here, the number of Ukrainian publications is one and a half orders of magnitude behind the world leaders.
Prediction markets predict the outcomes of replications well and outperform a survey of market participants' individual forecasts. They are a promising tool for assessing the reproducibility of published scientific results. Prediction markets allow estimating the probabilities of hypotheses being true at different testing stages, which provides information on the temporal dynamics of scientific discovery. Here you can see prediction-market companies such as Hypermind, PredictIt, Betfair, Good Judgment, and Metaculus, as well as decentralized platforms such as Gasser, Gnosis, and Augur.
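As a hedged sketch, many prediction markets price shares with an automated market maker such as the logarithmic market scoring rule (LMSR); the mechanism below is standard, though not necessarily what the platforms named above use, and the quantities are illustrative:

```python
import math

def lmsr_cost(q_yes, q_no, b=100.0):
    """LMSR cost function; a trader pays the difference in cost before/after."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def lmsr_price(q_yes, q_no, b=100.0):
    """Instantaneous price of the YES share, readable as the market's current
    probability that, e.g., a study will replicate."""
    e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

print(round(lmsr_price(0, 0), 3))                    # 0.5 (market starts flat)
print(round(lmsr_price(50, 0), 3))                   # 0.622 after 50 YES bought
print(round(lmsr_cost(50, 0) - lmsr_cost(0, 0), 2))  # 28.09 paid for that trade
```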
And what about crowdsourcing scientific prediction markets? These are outcomes one should track in psychology investigations aimed at overcoming the replication crisis. Research that cannot be replicated hinders discoveries. Could an artificial-intelligence-powered tool change the incentives to benefit scientists? The output of scientific papers increases, but scientific knowledge does not.
A single scientific result may not be reliable without falsification. Strong hypotheses, along with methods that test and refute these hypotheses, should be used by researchers. Testing a resulting hypothesis through a team-science framework allows addressing the heterogeneity in samples and/or methods that makes so many published findings tentative. It would be fantastic to test whether a prediction market could be added to the review process, not replacing the three or four reviewers who carefully read the paper, but as an addition, to see how much it could improve the review. It is believed that the combination of multiple individual evaluations compensates for the subjective biases of experts and turns into the wisdom of crowds, allowing a more accurate decision to be derived. This phenomenon belongs to behavioral economics, and its essence is properly described in the notes of Ariaboa.
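For illustration, here is a hedged sketch of one standard way such pooling is done, averaging the reviewers' probability judgments on the log-odds scale; the numbers are invented:

```python
import math
from statistics import mean

def crowd_probability(individual_probs, extremize=1.0):
    """Pool individual probability judgments by averaging their log-odds.
    extremize > 1 pushes the consensus away from 0.5, a common correction
    for the shared caution of individual forecasters."""
    logits = [math.log(p / (1 - p)) for p in individual_probs]
    return 1 / (1 + math.exp(-extremize * mean(logits)))

reviewers = [0.55, 0.70, 0.60, 0.65]           # four reviewers' judgments
print(round(crowd_probability(reviewers), 3))  # pooled estimate ~0.627
```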
Well, but if we need to collect statistics for scientific metrics, then how do we measure the contribution of individual scholars or scientific groups? On this occasion, the global player ORCID basically offered its own set of contribution roles for scientific output, allowing them to be marked by the participants of the scientific process themselves. Now we come to our own problem, namely the nuances of the distribution of contribution within groups and its fixation in scientific databases.
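As a sketch, contribution roles of the kind ORCID supports (the names below follow the CRediT taxonomy) can be recorded per author; the record layout and identifiers are assumptions for illustration, not ORCID's actual schema:

```python
# Illustrative per-author contribution record; the layout is an assumption,
# but the role names follow the CRediT taxonomy that ORCID supports.
article = {
    "doi": "10.0000/example",  # hypothetical identifier
    "authors": [
        {"orcid": "0000-0000-0000-0001",
         "roles": ["Conceptualization", "Methodology"]},
        {"orcid": "0000-0000-0000-0002",
         "roles": ["Investigation", "Writing - original draft"]},
    ],
}

def roles_of(record, orcid):
    """Return the contribution roles recorded for one ORCID iD."""
    for author in record["authors"]:
        if author["orcid"] == orcid:
            return author["roles"]
    return []

print(roles_of(article, "0000-0000-0000-0001"))  # ['Conceptualization', 'Methodology']
```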
Therefore, we posed the following weakly formalized problem in the form of a survey. A founder researcher developed a theory: basically, he solved an important scientific problem and published a unique article, while a group of disciple followers applies the founder's results in small applied directions and writes articles referring to one another and to the founder researcher, who himself no longer writes. Therefore, the founder researcher has an h-index of 1, while his followers gain a much higher h-index. Who in this scientific group will have greater scientific authority? The initial conditions of the problem are as follows: (a) the author of the idea does not claim co-authorship, and (b) it is not necessary for him to be the chief.
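To make the setup concrete, here is a toy calculation with invented citation counts showing how the founder ends up with h = 1 while the followers score higher:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return max((rank for rank, c in enumerate(cites, start=1) if c >= rank),
               default=0)

founder_papers = [120]                 # one seminal article, heavily cited
follower_papers = [9, 8, 7, 6, 5, 4]   # six modest applied articles

print(h_index(founder_papers))   # 1 -- despite 120 citations
print(h_index(follower_papers))  # 5 -- "greater authority" by h-index alone
```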
A question arises here: is it appropriate to include the author of an idea as a co-author even when he or she did not write anything in the text? The discussion showed a wide range of opinions but allowed the wording to be clarified and enriched. In the end, the main provisions came down to the following considerations. The authors of an article are not identical to the persons who wrote the text of the article. Authorship equals idea plus measurements plus other inputs.
If the results of measurements are in the text, their performers are true co-authors. In the natural sciences, obtaining results is often the main thing, and the description is secondary. If a disciple follower continues scientific activity, he should publish. For decision-making on funding projects or tenure, the h-index is fairly valid. But investors do not need the h-index: a venture entrepreneur will choose to support a founder researcher who came up with the idea himself, not someone who only uses the idea. There are outputs that can be assessed in addition to articles: patents, preprints, software, and so on. Founder researchers, not disciple followers, deserve preferences, with reservations. Real research is measured not only by the h-index; other formal evaluation techniques, not just the h-index, could have detected this feature. Among them are eponymous references: for example, the Crowley method, Jacobson's organ, Newton's formulae, the volt unit, Pearson's law, the Voronoi diagram, etc. But if external management arises and resources are to be evaluated, how can one justify why a founder researcher gets such preferences?
If a founder researcher is taken on as a co-author, his h-index will increase; this may conflict with ethical integrity. The h-index is a measurement tool with its own purpose and sensitivity area, but in this case the tool is not used for its purpose.
Formal approaches are an inevitable evil in the context of impersonal control mechanisms. There are many alternatives to the h-index, and new ones are emerging now.
As an example, an index was proposed that counts the number of publications that borrowed your ideas, data, or material but did not cite your paper properly. Indices are an evaluation tool, not a defining and mandatory criterion for decision-making.
Decision-making in complex hierarchies such as scientific teams requires explicit, clear criteria. The temptation of simple decisions is almost irresistible.
But collective discussion produced more qualified opinions than the author could have achieved alone, and this joint contribution is a valuable practice for evaluating problems of scientometrics. The practical value of the survey presented to you is a completed formulation of the problem and a clarification of its boundaries.
The benefit of using collective discussion for the foundation of problem reviews, the writing of the history of science, and the development of scientometrics is indisputable.
Overall, we can summarize the conclusions presented above with the following quote from Bartling's Opening Science: modern information and communication technologies have changed scientific research. The exchange and discussion of ideas in social networks for scientists, new media for collaboration, new publication formats, and so on: the creation and spread of knowledge are becoming more transparent and accessible. Research processes will change even more in the coming years due to the tools and visions that drive the current scientific revolution, often called Open Science.
Thank you for your attention.