kth.se Publications
Wang, Qi
Publications (10 of 11)
Wang, Q. & Jeppsson, T. (2022). Identifying benchmark units for research management and evaluation. Scientometrics, 127(12), 7557-7574
2022 (English) In: Scientometrics, ISSN 0138-9130, E-ISSN 1588-2861, Vol. 127, no 12, p. 7557-7574. Article in journal (Refereed), Published
Abstract [en]

While normalized bibliometric indicators are expected to resolve subject-field differences between organizations in research evaluations, identifying reference organizations that work on similar research topics remains important. Research organizations, policymakers and research funders tend to use benchmark units as points of comparison for a given research unit in order to understand and monitor its development and performance. In addition, benchmark organizations can also be used to pinpoint potential collaboration partners or competitors. Methods for identifying benchmark research units are therefore of practical significance. Even so, few studies have explored this problem further. This study proposes a bibliometric approach for the identification of benchmark units. We define an appropriate benchmark as a well-connected research environment in which researchers investigate similar topics and publish a similar number of publications, compared to a given research organization during the same period. Four essential attributes for the evaluation of benchmarks are research topics, output, connectedness, and scientific impact. We apply this strategy to two research organizations in Sweden and examine the effectiveness of the proposed method. Identified benchmark units are evaluated by examining research similarity and the robustness of various measures of connectivity.
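The topic-similarity attribute described in this abstract can be illustrated with a minimal sketch (the data, function name, and publication-count representation are illustrative assumptions, not the paper's actual implementation): each organization is represented as a vector of publication counts over a shared set of research topics, and candidate benchmarks are ranked by cosine similarity to the unit under evaluation.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length topic-profile vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical publication counts per research topic (same topic order).
unit      = [12, 5, 0, 3]   # organization under evaluation
candidate = [10, 6, 1, 2]   # candidate benchmark with a similar profile
unrelated = [0, 0, 9, 0]    # organization active on disjoint topics

sim_close = cosine_similarity(unit, candidate)
sim_far = cosine_similarity(unit, unrelated)
```

In this toy example the candidate with an overlapping topic profile scores close to 1, while the organization active on disjoint topics scores 0; a real analysis would also compare output volume and connectedness, as the abstract notes.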

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2022
National Category
Other Social Sciences; Information Studies
Identifiers
urn:nbn:se:kth:diva-324275 (URN); 10.1007/s11192-022-04413-7 (DOI); 000812913300002 (); 2-s2.0-85132324076 (Scopus ID)
Note

QC 20230807

Available from: 2023-02-24. Created: 2023-02-24. Last updated: 2023-08-07. Bibliographically approved.
Wang, Q. & Jeppsson, T. (2021). A bibliometric strategy for identifying benchmark research units. In: Glanzel, W., Heeffer, S., Chi, P. S. & Rousseau, R. (Eds.), 18th International Conference on Scientometrics and Informetrics (ISSI 2021). Paper presented at the 18th International Conference on Scientometrics and Informetrics (ISSI), July 12-15, 2021, KU Leuven, held online (pp. 1229-1234). International Society for Scientometrics and Informetrics (ISSI)
2021 (English) In: 18th International Conference on Scientometrics and Informetrics (ISSI 2021) / [ed] Glanzel, W., Heeffer, S., Chi, P. S. & Rousseau, R., International Society for Scientometrics and Informetrics (ISSI), 2021, p. 1229-1234. Conference paper, Published paper (Refereed)
Abstract [en]

While normalized bibliometric indicators are expected to resolve the subject-field differences between organizations in research evaluations, size still matters. Furthermore, research organizations, policymakers and research funding providers tend to use benchmark units as points of comparison for a given research center in order to understand and monitor its development and performance. In addition to monitoring and evaluations, the identification of comparable benchmark organizations can also be used to pinpoint potential collaboration partners or competitors. Therefore, methods to identify benchmark research units are of practical significance. However, few studies have investigated this problem. This study aims to propose a bibliometric method to identify benchmarks. We define a benchmark as a well-connected research environment, in which researchers work on similar topics and publish a similar number of publications compared to a given research center during the same period. Three essential attributes for the evaluation of benchmarks are research topics, output, and coherence. We apply this strategy to a Swedish research center, and examine the effectiveness of the method.

Place, publisher, year, edition, pages
International Society for Scientometrics and Informetrics (ISSI), 2021
Series
Proceedings of the International Conference on Scientometrics and Informetrics, ISSN 2175-1935
National Category
Information Studies
Identifiers
urn:nbn:se:kth:diva-304781 (URN); 000709638700135 (); 2-s2.0-85112622329 (Scopus ID)
Conference
18th International Conference on Scientometrics and Informetrics (ISSI), July 12-15, 2021, KU Leuven, held online
Funder
Vinnova
Note

Part of proceedings: ISBN 978-90-803282-2-8

QC 20230612

Available from: 2021-11-19. Created: 2021-11-19. Last updated: 2024-03-18. Bibliographically approved.
Wang, Q. & Schneider, J. W. (2020). Consistency and validity of interdisciplinarity measures. Quantitative Science Studies, 1(1), 239-263
2020 (English) In: Quantitative Science Studies, ISSN 2641-3337, Vol. 1, no 1, p. 239-263. Article in journal (Refereed), Published
Abstract [en]

Measuring interdisciplinarity is a pertinent but challenging issue in quantitative studies of science. There seems to be a consensus in the literature that the concept of interdisciplinarity is multifaceted and ambiguous. Unsurprisingly, a variety of measures of interdisciplinarity have been proposed. However, few studies have thoroughly examined the validity of these measures and the relations between them. In this study, we present a systematic review of these interdisciplinarity measures and explore their inherent relations. We examine the measures in relation to the Web of Science journal subject categories. Our results corroborate recent claims that the current measurements of interdisciplinarity in science studies are both confusing and unsatisfying. We find surprisingly divergent results when comparing measures that supposedly capture similar features or dimensions of the concept of interdisciplinarity. We therefore argue that current measurements of interdisciplinarity should be interpreted with great caution in science and evaluation studies, or in relation to science policies. We also question the validity of current measures and argue that we do not need more of the same, but rather something different, in order to measure the multidimensional and complex construct of interdisciplinarity.

Place, publisher, year, edition, pages
MIT Press - Journals, 2020
Keywords
interdisciplinary research, interdisciplinarity, measures, consistency, validity
National Category
Social Sciences
Identifiers
urn:nbn:se:kth:diva-302021 (URN); 10.1162/qss_a_00011 (DOI); 000691837400012 (); 2-s2.0-85117786758 (Scopus ID)
Note

QC 20210916

Not a duplicate of DiVA record 1341353

Available from: 2021-09-16. Created: 2021-09-16. Last updated: 2022-06-25. Bibliographically approved.
Wang, Q. (2019). Consistency and validity of interdisciplinarity measures.
2019 (English) Data set
National Category
Other Computer and Information Science
Identifiers
urn:nbn:se:kth:diva-255705 (URN)
Note

QC 20190821

Available from: 2019-08-08. Created: 2019-08-08. Last updated: 2024-03-18. Bibliographically approved.
Wang, Q. (2018). A Bibliometric Model for Identifying Emerging Research Topics. Journal of the Association for Information Science and Technology, 69(2), 290-304
2018 (English) In: Journal of the Association for Information Science and Technology, ISSN 2330-1635, E-ISSN 2330-1643, Vol. 69, no 2, p. 290-304. Article in journal (Refereed), Published
Abstract [en]

Detecting emerging research topics is essential, not only for research agencies but also for individual researchers. Previous studies have created various bibliographic indicators for the identification of emerging research topics. However, as indicated by Rotolo et al. (Research Policy, 44, 1827-1843, 2015), the most serious problems are the lack of an acknowledged definition of emergence and the incomplete elaboration of the linkages between the definitions that are used and the indicators that are created. With these issues in mind, this study first adjusts the definition of an emerging technology proposed by Rotolo et al. (2015) to accommodate the analysis. Next, a set of criteria for the identification of emerging topics is proposed according to the adjusted definition and the attributes of emergence. Using two sets of parameter values, several emerging research topics are identified. Finally, evaluation tests are conducted by demonstrating the proposed approach and comparing it with previous studies. The strength of the present methodology lies in the fact that it is fully transparent, straightforward, and flexible.

Place, publisher, year, edition, pages
Wiley, 2018
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-221924 (URN); 10.1002/asi.23930 (DOI); 000419516900009 (); 2-s2.0-85044986249 (Scopus ID)
Note

QC 20180131

Available from: 2018-01-31. Created: 2018-01-31. Last updated: 2024-03-18. Bibliographically approved.
Wang, Q. & Waltman, L. (2016). Large-scale analysis of the accuracy of the journal classification systems of Web of Science and Scopus. Journal of Informetrics, 10(2), 347-364
2016 (English) In: Journal of Informetrics, ISSN 1751-1577, E-ISSN 1875-5879, Vol. 10, no 2, p. 347-364. Article in journal (Refereed), Published
Abstract [en]

Journal classification systems play an important role in bibliometric analyses. The two most important bibliographic databases, Web of Science and Scopus, each provide a journal classification system. However, no study has systematically investigated the accuracy of these classification systems. To examine and compare the accuracy of journal classification systems, we define two criteria on the basis of direct citation relations between journals and categories. We use Criterion I to select journals that have weak connections with their assigned categories, and we use Criterion II to identify journals that are not assigned to categories with which they have strong connections. If a journal satisfies either of the two criteria, we conclude that its assignment to categories may be questionable. Accordingly, we identify all journals with questionable classifications in Web of Science and Scopus. Furthermore, we perform a more in-depth analysis for the field of Library and Information Science to assess whether our proposed criteria are appropriate and whether they yield meaningful results. It turns out that according to our citation-based criteria Web of Science performs significantly better than Scopus in terms of the accuracy of its journal classification system.
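Criterion I described above, which flags journals weakly connected to their assigned categories, can be sketched roughly as follows (the data, threshold, and function name are hypothetical illustrations; the paper defines its criteria on direct citation relations between journals and categories in more detail):

```python
def weakly_connected(citations_to_category, total_citations, threshold=0.1):
    """Criterion-I-style check: flag a journal whose share of citations
    linking it to its assigned category falls below a threshold."""
    if total_citations == 0:
        return True  # no citation evidence connecting journal and category
    return citations_to_category / total_citations < threshold

# Hypothetical journal: 400 citation links, only 20 involve its assigned category.
flagged = weakly_connected(citations_to_category=20, total_citations=400)
# Hypothetical well-classified journal: 180 of 400 links involve its category.
ok = weakly_connected(citations_to_category=180, total_citations=400)
```

A journal flagged this way would then be examined manually, as the paper does for the Library and Information Science field, to decide whether the classification is genuinely questionable.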

Place, publisher, year, edition, pages
Elsevier, 2016
Keywords
Bibliographic database, Citation analysis, Journal classification system, Scopus, Web of Science
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-186983 (URN); 10.1016/j.joi.2016.02.003 (DOI); 000377413800003 (); 2-s2.0-84959513043 (Scopus ID)
Note

QC 20160524

Available from: 2016-05-24. Created: 2016-05-16. Last updated: 2024-03-18. Bibliographically approved.
Wang, Q. (2016). Studies in the Dynamics of Science: Exploring emergence, classification, and interdisciplinarity. (Doctoral dissertation). Stockholm, Sweden: KTH Royal Institute of Technology
2016 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

The dynamic nature of science is embodied in the growth of knowledge in magnitude and the transformation of knowledge in structure. More specifically, the growth in magnitude is indicated by a sharp increase in the number of scientific publications in recent decades. The transformation of knowledge occurs as the boundaries of scientific disciplines become increasingly less distinct, resulting in a complicated situation wherein disciplines and interdisciplinary research topics coexist and co-evolve. Knowledge production in such a context creates challenges for the measurement of science. This thesis aims to develop more flexible bibliometric methodologies in order to address some of the challenges to measuring science effectively. To be specific, this thesis 1) proposes a new approach for identifying emerging research topics; 2) measures the interdisciplinarity of research topics; 3) explores the accuracy of the journal classification systems of the Web of Science and Scopus; 4) examines the role of cognitive distance in grant decisions; and 5) investigates the effect of cognitive distance between collaborators on their research output. The data used in this thesis are mainly from the in-house Web of Science and Scopus databases of the Centre for Science and Technology Studies (CWTS) at Leiden University. Quantitative analyses, in particular bibliometric analyses, are the main research methodologies employed in this thesis. Furthermore, this thesis primarily offers methodological contributions, proposing a series of approaches designed to tackle the challenges created by the dynamics of science. While the major contribution of this dissertation lies in the improvement of certain bibliometric approaches, it also enhances the understanding of the current system of science. In particular, the approaches and research findings presented here have implications for various stakeholders, including publishing organizations, bibliographic database producers, research policy makers, and research funding agencies. Indeed, these approaches could be built into a software tool and thereby be made available to researchers beyond the field of bibliometric studies.

Place, publisher, year, edition, pages
Stockholm, Sweden: KTH Royal Institute of Technology, 2016. p. viii, 48
Series
TRITA-IEO, ISSN 1100-7982 ; 2016:04
Keywords
science dynamics, bibliometrics, emerging research topics, interdisciplinary research, journal classification systems, cognitive distance, research policy
National Category
Other Social Sciences not elsewhere specified
Identifiers
urn:nbn:se:kth:diva-184724 (URN); 978-91-7595-929-0 (ISBN)
Public defence
2016-04-29, Sal F3, Lindstedtsvägen 26, KTH Campus, Stockholm, 10:00 (English)
Note

QC 20160406

Available from: 2016-04-06. Created: 2016-04-04. Last updated: 2025-05-05. Bibliographically approved.
Wang, Q. & Sandström, U. (2015). Defining the role of cognitive distance in the peer review process with an explorative study of a grant scheme in infection biology. Research Evaluation, 24, 271-281
2015 (English) In: Research Evaluation, ISSN 0958-2029, E-ISSN 1471-5449, Vol. 24, p. 271-281. Article in journal (Refereed), Published
Abstract [en]

The aim of this paper is twofold: (1) to provide a methodology for the measurement of cognitive distance between researchers, and (2) to explore the role of cognitive distance in the results of peer review processes. Cited references and the content of articles are used to represent researchers' respective scientific knowledge bases. Based on two different approaches, Author-Bibliographic Coupling analysis and Author-Topic analysis, we apply the methodology to a recent competition for grants from the Swedish Strategic Foundation. Results indicate that cognitive distances between applicants and reviewers might influence peer review results, but that the impact is to some extent at the unexpected end. The main contribution of this paper is the elaboration on the relevance of the concept of cognitive distance to research evaluation in general, and especially in relation to peer review as a model used in grant decisions.

Place, publisher, year, edition, pages
Oxford University Press, 2015
National Category
Other Social Sciences not elsewhere specified
Identifiers
urn:nbn:se:kth:diva-184721 (URN); 10.1093/reseval/rvv009 (DOI); 000358019000004 (); 2-s2.0-84942242243 (Scopus ID)
Note

QC 20160406

Available from: 2016-04-04. Created: 2016-04-04. Last updated: 2025-05-05. Bibliographically approved.
Wang, Q. & Waltman, L. (2015). Large-scale comparison between the journal classification systems of Web of Science and Scopus. Journal of Informetrics, 10, 347-364
2015 (English) In: Journal of Informetrics, ISSN 1751-1577, E-ISSN 1875-5879, Vol. 10, p. 347-364. Article in journal (Refereed), Published
National Category
Other Computer and Information Science
Identifiers
urn:nbn:se:kth:diva-184719 (URN)
Note

QC 20160406

Available from: 2016-04-04. Created: 2016-04-04. Last updated: 2024-03-18. Bibliographically approved.
Wang, Q. (2015). Measuring Interdisciplinarity of a Given Body of Research. In: Proceedings of ISSI 2015 Istanbul: 15th International Society of Scientometrics and Informetrics Conference. Paper presented at the 15th International Conference of the International Society for Scientometrics and Informetrics (ISSI), June 29 - July 4, 2015, Bogazici University, Istanbul, Turkey (pp. 372-383). Leuven University Press
2015 (English) In: Proceedings of ISSI 2015 Istanbul: 15th International Society of Scientometrics and Informetrics Conference, Leuven University Press, 2015, p. 372-383. Conference paper, Published paper (Refereed)
Abstract [en]

Identifying interdisciplinary research topics is an essential subject, not only for research policy but also for research funding agencies. Previous research on measuring interdisciplinarity has mainly operated at the macro level, such as Web of Science subject categories and journals. However, these studies lack analysis at the micro level of the current science system; that is, few studies have analyzed interdisciplinarity at the level of publications. To cover this gap, we introduce an approach for measuring interdisciplinarity at the level of micro research topics. The research topics are clustered by direct citation relations in a large-scale database. Drawing on the characteristics of boundary-crossing research, we provide an alternative approach to measuring interdisciplinarity. Comparing it with the widely used Rao-Stirling indicator (integration score), we found that the results obtained by the two indicators of interdisciplinarity are strongly correlated; we therefore believe that this approach can effectively identify boundary-crossing research topics.
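The Rao-Stirling indicator (integration score) used as the point of comparison in this abstract is commonly computed by summing p_i * p_j * d_ij over pairs of distinct categories, where p_i is the share of a topic's references in category i and d_ij is a dissimilarity between categories i and j. A minimal sketch, with hypothetical proportions and a hypothetical distance matrix:

```python
def rao_stirling(proportions, distance):
    """Rao-Stirling diversity: sum of p_i * p_j * d_ij over distinct pairs.
    `proportions[i]` is the share of references in category i;
    `distance[i][j]` is a dissimilarity between categories i and j."""
    n = len(proportions)
    return sum(
        proportions[i] * proportions[j] * distance[i][j]
        for i in range(n)
        for j in range(n)
        if i != j
    )

# Hypothetical example: two categories, maximally dissimilar (d = 1).
d = [[0.0, 1.0], [1.0, 0.0]]
balanced = rao_stirling([0.5, 0.5], d)  # references split evenly across fields
single = rao_stirling([1.0, 0.0], d)    # all references in a single field
```

The balanced profile yields a positive score while the single-field profile yields zero, capturing the intuition that interdisciplinarity requires drawing on multiple, distant fields.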

Place, publisher, year, edition, pages
Leuven University Press, 2015
Series
Proceedings of the International Conference on Scientometrics and Informetrics, ISSN 2175-1935
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-191668 (URN); 000380499700052 (); 2-s2.0-84991056829 (Scopus ID); 978-975-518-381-7 (ISBN)
Conference
15th International Conference of the International Society for Scientometrics and Informetrics (ISSI), June 29 - July 4, 2015, Bogazici University, Istanbul, Turkey
Note

QC 20160908

Available from: 2016-09-08. Created: 2016-09-02. Last updated: 2024-03-18. Bibliographically approved.