Background: The evaluation of scholarly articles’ impact has been heavily based on citation metrics despite the limitations of this approach. Therefore, the quest for meticulous and refined measures to evaluate publications’ impact is warranted. Semantic Scholar (SS) is an artificial intelligence-based database that allegedly identifies influential citations, defined as “Highly Influential Citations” (HICs). Citations are considered highly influential according to SS when the cited publication has a significant impact on the citing publication (i.e., the citer uses or extends the cited work). Altmetrics are measures of online attention to research mined from activity in online tools and environments. Aims: The current study aimed to explore whether SS HICs provide added value when it comes to measuring research impact compared to total citation counts and the Altmetric Attention Score (AAS). Methods: Dimensions was used to generate the dataset for this study, which included COVID-19-related scholarly articles published by researchers affiliated with Jordanian institutions. Altmetric Explorer was selected as the altmetrics harvesting tool, while Semantic Scholar was used to extract details related to HICs. Results: A total of 618 publications comprised the final dataset. Only 4.57% (413/9029) of the total SS citations compiled in this study were classified as SS HICs. Based on SS categories of citation intent, 2626 were background citations (29.08%; providing historical context, justification of importance, and/or additional information related to the cited paper), 358 were result citations (3.97%; extending findings from previously conducted research), and 263 were method citations (2.91%; using previously established procedures or experiments to determine whether the results are consistent with findings in related studies).

The scientific community generally discourages authors of research papers from citing papers that did not influence them, because such "rhetorical" citations are assumed to degrade the literature and the incentives for good work. Intuitively, a world where authors cite only substantively appears attractive. We argue that mandating substantive citing may have underappreciated consequences for the allocation of attention and for dynamism. We develop a novel agent-based model in which agents cite substantively and rhetorically. Agents first select papers to read based on their expected quality, read them and observe their actual quality, become influenced by those that are sufficiently good, and substantively cite them. Next, agents fill any remaining slots in their reference lists with papers that support their claims, regardless of whether those papers actually influenced them. By turning rhetorical citing on and off, we find that rhetorical citing increases the correlation between quality and citations, increases citation churn, and reduces citation inequality. This occurs because rhetorical citing redistributes some citations from a stable set of elite-quality papers to a more dynamic set with high-to-moderate quality and high rhetorical value. Increasing the size of reference lists, often seen as an undesirable trend, amplifies these effects. In sum, rhetorical citing helps deconcentrate attention and makes it easier to displace incumbent ideas, so whether it is indeed undesirable depends on the metrics used to judge desirability.

Citation indices are tools used by the academic community for research and research evaluation that aggregate scientific literature output and measure impact by collating citation counts. Citation indices help measure the interconnections between scientific papers but fall short because they fail to communicate contextual information about a citation. The usage of citations in research evaluation without consideration of context can be problematic, because a citation that presents contrasting evidence to a paper is treated the same as a citation that presents supporting evidence. To solve this problem, we have used machine learning, traditional document ingestion methods, and a network of researchers to develop a “smart citation index” called scite, which categorizes citations based on context. Scite shows how a citation was used by displaying the surrounding textual context from the citing paper and a classification from our deep learning model indicating whether the statement provides supporting or contrasting evidence for the referenced work, or simply mentions it. Scite has been developed by analyzing over 25 million full-text scientific articles and currently has a database of more than 880 million classified citation statements. Here we describe how scite works and how it can be used to further research and research evaluation.
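The two-step citing behavior described in the agent-based model (substantive citing of genuinely influential papers, then rhetorical filling of leftover reference slots) can be sketched roughly as follows. This is a minimal illustration, not the authors' calibrated model: the paper counts, reference-list size, influence threshold, and noise level are all assumed values chosen for readability.

```python
import random

random.seed(0)

# Illustrative parameters (assumptions, not the paper's values).
N_PAPERS = 100     # candidate papers in the literature
REF_SLOTS = 10     # size of the reference list
READ_BUDGET = 15   # papers an agent can actually read
THRESHOLD = 0.7    # quality needed to genuinely influence the agent

papers = [{"id": i, "quality": random.random()} for i in range(N_PAPERS)]

def write_paper(papers):
    # 1. Select papers to read based on *expected* quality
    #    (true quality observed through noise).
    by_expected = sorted(papers,
                         key=lambda p: p["quality"] + random.gauss(0, 0.2),
                         reverse=True)
    to_read = by_expected[:READ_BUDGET]

    # 2. Read them and observe actual quality; substantively cite
    #    those good enough to influence the agent.
    substantive = [p["id"] for p in to_read if p["quality"] >= THRESHOLD]
    substantive = substantive[:REF_SLOTS]

    # 3. Fill any remaining slots rhetorically: papers cited to support
    #    the agent's claims regardless of actual influence (modeled here
    #    as a random draw from the uncited pool).
    remaining = REF_SLOTS - len(substantive)
    pool = [p["id"] for p in papers if p["id"] not in substantive]
    rhetorical = random.sample(pool, remaining)

    return substantive, rhetorical

substantive, rhetorical = write_paper(papers)
print(len(substantive), len(rhetorical))
```

Turning rhetorical citing "off" in this sketch would mean leaving step 3 out, so reference lists shrink to only the substantive citations; comparing citation distributions between the two regimes is what drives the paper's churn and inequality findings.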
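The three-way classification scite surfaces (supporting, contrasting, or merely mentioning) can be illustrated with a toy stand-in. Scite's actual classifier is a deep learning model trained on full-text articles; the keyword heuristic below is only a hypothetical sketch to show the shape of the input (a citation statement with surrounding context) and the output label.

```python
# Toy stand-in for context-aware citation classification. These cue
# phrases are illustrative assumptions, not scite's method or vocabulary.
SUPPORT_CUES = ("consistent with", "confirms", "in agreement with", "replicates")
CONTRAST_CUES = ("contrary to", "contradicts", "fails to replicate", "in contrast to")

def classify_citation_statement(text: str) -> str:
    """Label a citation statement as supporting, contrasting, or mentioning."""
    t = text.lower()
    if any(cue in t for cue in CONTRAST_CUES):
        return "contrasting"
    if any(cue in t for cue in SUPPORT_CUES):
        return "supporting"
    return "mentioning"

print(classify_citation_statement(
    "Our results are consistent with those of Smith et al. [12]."))
# → supporting
```

The point of the example is the interface, not the heuristic: a plain citation count would treat all three labels identically, which is exactly the problem the abstract says context-aware indexing addresses.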