1 - 14 of 14
  • 1.
    Ahlgren, Per
    et al.
    Stockholm University.
    Colliander, Cristian
    Document-document similarity approaches and science mapping: Experimental comparison of five approaches (2009). In: Journal of Informetrics, ISSN 1751-1577, E-ISSN 1875-5879, Vol. 3, no. 1, p. 49-63. Article in journal (Refereed)
  • 2.
    Ahlgren, Per
    et al.
    Stockholm University.
    Waltman, Ludo
    The correlation between citation-based and expert-based assessments of publication channels: SNIP and SJR vs. Norwegian quality assessments (2014). In: Journal of Informetrics, ISSN 1751-1577, E-ISSN 1875-5879, Vol. 8, no. 4, p. 985-996. Article in journal (Refereed)
    Abstract [en]

    We study the correlation between citation-based and expert-based assessments of journals and series, which we collectively refer to as sources. The source normalized impact per paper (SNIP), the Scimago Journal Rank 2 (SJR2) and the raw impact per paper (RIP) indicators are used to assess sources based on their citations, while the Norwegian model is used to obtain expert-based source assessments. We first analyze – within different subject area categories and across such categories – the degree to which RIP, SNIP and SJR2 values correlate with the quality levels in the Norwegian model. We find that sources at higher quality levels on average have substantially higher RIP, SNIP, and SJR2 values. Regarding subject area categories, SNIP seems to perform substantially better than SJR2 from the field normalization point of view. We then compare the ability of RIP, SNIP and SJR2 to predict whether a source is classified at the highest quality level in the Norwegian model or not. SNIP and SJR2 turn out to give more accurate predictions than RIP, which provides evidence that normalizing for differences in citation practices between scientific fields indeed improves the accuracy of citation indicators.
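
    Of the three citation indicators above, the raw impact per paper (RIP) is simply a source's citations divided by its papers. The correlation analysis the abstract describes can be illustrated with a minimal sketch; the source data, the quality levels, and the use of Spearman's rank correlation below are illustrative assumptions, not the paper's actual data or full method.

```python
# Illustrative sketch: RIP values vs. Norwegian quality levels (hypothetical data).
from scipy.stats import spearmanr

# (source, citations, papers, Norwegian quality level) -- made-up numbers.
sources = [
    ("Journal A", 1200, 300, 2),
    ("Journal B", 150, 120, 1),
    ("Series C", 900, 180, 2),
    ("Journal D", 60, 90, 1),
    ("Journal E", 40, 80, 0),
]

rip_values = [cites / papers for _, cites, papers, _ in sources]  # raw impact per paper
levels = [level for _, _, _, level in sources]

rho, p_value = spearmanr(rip_values, levels)
print(f"Spearman correlation between RIP and quality level: {rho:.2f} (p={p_value:.3f})")
```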

  • 3. Colliander, Cristian
    et al.
    Ahlgren, Per
    Stockholm University.
    The effects and their stability of field normalization baseline on relative performance with respect to citation impact: a case study of 20 natural science departments (2011). In: Journal of Informetrics, ISSN 1751-1577, E-ISSN 1875-5879, Vol. 5, no. 1, p. 101-113. Article in journal (Refereed)
  • 4.
    Koski, Timo
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Sandström, Erik
    Sandström, Ulf
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.), Sustainability and Industrial Dynamics.
    Towards field-adjusted production: Estimating research productivity from a zero-truncated distribution (2016). In: Journal of Informetrics, ISSN 1751-1577, E-ISSN 1875-5879, Vol. 10, no. 4, p. 1143-1152. Article in journal (Refereed)
    Abstract [en]

    Measures of research productivity (e.g. peer-reviewed papers per researcher) are a fundamental part of bibliometric studies, but they are often restricted by the properties of the available data. This paper addresses that fundamental issue and presents a detailed method for estimating productivity (peer-reviewed papers per researcher) from data available in bibliographic databases (e.g. Web of Science and Scopus). The method can, for example, be used to estimate average productivity in different fields, and such field reference values can in turn be used to produce field-adjusted production values, which could dramatically increase the relevance of bibliometric rankings and other bibliometric performance indicators. The results indicate that the estimations are reasonably stable given a sufficiently large data set.
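
    The estimation problem the abstract describes arises because researchers without any papers in the database are unobservable, so the observed papers-per-researcher counts follow a zero-truncated distribution. Below is a minimal sketch using a zero-truncated Poisson fitted by maximum likelihood; the specific distribution, the sample counts, and the solver are illustrative assumptions and not necessarily the estimator developed in the paper.

```python
# Sketch: estimate mean productivity from zero-truncated counts (zero-truncated Poisson).
# Assumption: the zero-truncated Poisson stands in for whatever distribution the paper uses.
import math
from scipy.optimize import brentq

def estimate_untruncated_mean(observed_counts):
    """MLE of the Poisson mean when counts are observed only if >= 1."""
    m = sum(observed_counts) / len(observed_counts)  # mean of the truncated sample
    # For a zero-truncated Poisson, E[K | K >= 1] = lam / (1 - exp(-lam)),
    # so the MLE solves m = lam / (1 - exp(-lam)); the root lies in (0, m).
    return brentq(lambda lam: lam / (1.0 - math.exp(-lam)) - m, 1e-9, m)

# Hypothetical field sample: papers per researcher, zeros invisible in the database.
counts = [1, 1, 2, 1, 3, 2, 1, 4, 1, 2, 6, 1]
lam = estimate_untruncated_mean(counts)
print(f"Observed mean: {sum(counts)/len(counts):.2f}, estimated field mean incl. zeros: {lam:.2f}")
print(f"Implied share of researchers with zero papers: {math.exp(-lam):.1%}")
```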

  • 5.
    Sandström, Ulf
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.), Sustainability and Industrial Dynamics.
    Van den Besselaar, Peter
    Funding, evaluation, and the performance of national research systems (2018). In: Journal of Informetrics, ISSN 1751-1577, E-ISSN 1875-5879, Vol. 12, no. 1, p. 365-384. Article in journal (Refereed)
    Abstract [en]

    Understanding the quality of science systems requires international comparative studies, which are difficult because of the lack of comparable data, especially about inputs in research. In this study, we deploy an approach based on change instead of on levels of inputs and outputs: an approach that to a large extent eliminates the problem of measurement differences between countries. We first show that there are large differences in efficiency between national science systems, defined as the increase in output (highly cited papers) per percentage increase in input (funding). We then discuss our findings using popular explanations of performance differences: differences in funding systems (performance related or not), differences in the level of competition, differences in the level of university autonomy, and differences in the level of academic freedom. Interestingly, the available data do not support these common explanations. What the data suggest is that efficient systems are characterized by a well-developed ex post evaluation system combined with considerable institutional funding and relatively low university autonomy (meaning high autonomy for the professionals). On the other hand, the less efficient systems have strong ex ante control, either through a high level of so-called competitive project funding, or through strong power of the university management. Another conclusion is that more and better data are needed.
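
    The efficiency measure in the abstract is a ratio of changes rather than levels: the percentage increase in highly cited papers per percentage increase in funding. A minimal worked example with made-up numbers (not the paper's data) follows.

```python
# Sketch: change-based efficiency as described in the abstract (hypothetical numbers).
def efficiency(output_growth_pct: float, input_growth_pct: float) -> float:
    """Percentage increase in output (highly cited papers) per percentage increase in input (funding)."""
    return output_growth_pct / input_growth_pct

# A country whose funding grew 20% while its highly cited papers grew 30%:
print(efficiency(output_growth_pct=30.0, input_growth_pct=20.0))  # 1.5
```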

  • 6. Sjögårde, P.
    et al.
    Ahlgren, Per
    KTH, School of Education and Communication in Engineering Science (ECE), Department for Library services, Language and ARC, Library, Publication Infrastructure.
    Granularity of algorithmically constructed publication-level classifications of research publications: Identification of topics (2018). In: Journal of Informetrics, ISSN 1751-1577, E-ISSN 1875-5879, Vol. 12, no. 1, p. 133-152. Article in journal (Refereed)
    Abstract [en]

    The purpose of this study is to find a theoretically grounded, practically applicable and useful granularity level of an algorithmically constructed publication-level classification of research publications (ACPLC). The level addressed is the level of research topics. The methodology we propose uses synthesis papers and their reference articles to construct a baseline classification. A dataset of about 31 million publications, and their mutual citation relations, is used to obtain several ACPLCs of different granularity. Each ACPLC is compared to the baseline classification and the best performing ACPLC is identified. The results of two case studies show that the topics of the cases are closely associated with different classes of the identified ACPLC, and that these classes tend to treat only one topic. Further, the class size variation is moderate, and only a small proportion of the publications belong to very small classes. For these reasons, we conclude that the proposed methodology is suitable to determine the topic granularity level of an ACPLC and that the ACPLC identified by this methodology is useful for bibliometric analyses.
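
    The evaluation step in the abstract uses synthesis (review) papers whose reference articles should ideally fall into the same class of the algorithmically constructed classification. The sketch below scores a candidate classification by the average share of each synthesis paper's references that land in that paper's most common class; this majority-class scoring rule and the toy data are illustrative assumptions, not the paper's exact comparison procedure.

```python
# Sketch: score how well a classification keeps each synthesis paper's references together.
# The majority-class scoring rule is an illustrative assumption, not the paper's exact measure.
from collections import Counter

def cohesion(synthesis_refs, class_of):
    """Average share of a synthesis paper's references landing in its most common class."""
    scores = []
    for refs in synthesis_refs.values():
        classes = [class_of[r] for r in refs if r in class_of]
        if classes:
            scores.append(Counter(classes).most_common(1)[0][1] / len(classes))
    return sum(scores) / len(scores)

# Hypothetical data: two synthesis papers and a candidate publication-level classification.
synthesis_refs = {"review1": ["p1", "p2", "p3", "p4"], "review2": ["p5", "p6", "p7"]}
class_of = {"p1": "c1", "p2": "c1", "p3": "c1", "p4": "c2", "p5": "c3", "p6": "c3", "p7": "c3"}
print(f"Cohesion of candidate classification: {cohesion(synthesis_refs, class_of):.2f}")
```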

  • 7. van den Besselaar, P.
    et al.
    Sandström, Ulf
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.), Sustainability and Industrial Dynamics. Örebro University, Sweden.
    Early career grants, performance, and careers: A study on predictive validity of grant decisions (2015). In: Journal of Informetrics, ISSN 1751-1577, E-ISSN 1875-5879, Vol. 9, no. 4, p. 826-838, article id 580. Article in journal (Refereed)
    Abstract [en]

    The main rationale behind career grants is helping top talent develop into the next generation of leading scientists. Does career grant competition result in the selection of the best young talents? In this paper we investigate whether the selected applicants are indeed performing at the expected excellent level, something that has hardly been investigated in the research literature. We investigate the predictive validity of grant decision-making, using a sample of 260 early career grant applications in three social science fields. We measure the output and impact of the applicants about ten years after the application to find out whether the selected researchers perform better ex post than the non-successful ones. Overall, we find that predictive validity is low to moderate when comparing grantees with all non-successful applicants. Comparing grantees with the best-performing non-successful applicants, predictive validity is absent. This implies that the common belief that peers in selection panels are good at recognizing outstanding talents is incorrect. We also investigate the effects of the grants on careers and show that recipients of the grants do have better careers than the non-granted applicants, which makes the observed lack of predictive validity even more problematic.
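
    The core comparison in the abstract, later output and impact of grantees versus rejected applicants, can be sketched with a simple rank-based test. The data and the choice of the Mann-Whitney U test below are illustrative assumptions, not necessarily the statistics used in the paper.

```python
# Sketch: compare ex post performance of granted vs. rejected applicants (hypothetical data).
from scipy.stats import mannwhitneyu

granted_citations = [120, 85, 200, 60, 150, 90]      # ex post impact of grantees (made up)
rejected_citations = [110, 95, 40, 180, 70, 30, 55]  # ex post impact of rejected applicants

stat, p = mannwhitneyu(granted_citations, rejected_citations, alternative="greater")
print(f"Mann-Whitney U = {stat}, one-sided p = {p:.3f}")
# A non-significant difference, especially against the best-performing rejected applicants,
# would be consistent with the low predictive validity reported in the abstract.
```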

  • 8. Van Den Besselaar, P.
    et al.
    Sandström, Ulf
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.), Sustainability and Industrial Dynamics.
    Quantity matters, but how does it work? (2018). In: Journal of Informetrics, ISSN 1751-1577, E-ISSN 1875-5879, Vol. 12, no. 4, p. 1059-1062. Article in journal (Refereed)
  • 9. Van Den Besselaar, Peter
    et al.
    Heyman, Ulf
    Sandström, Ulf
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.), Sustainability and Industrial Dynamics.
    Do observations have any role in science policy studies? A reply (2017). In: Journal of Informetrics, ISSN 1751-1577, E-ISSN 1875-5879, Vol. 11, no. 3, p. 941-944. Article in journal (Refereed)
    Abstract [en]

    In Van den Besselaar et al. (2017) we tested the claim of Linda Butler (2003) that funding systems based on output counts have a negative effect on impact as well as quality. Using new data and improved indicators, we indeed reject Butler's claim: the impact of Australian research improved after the introduction of such a system, and did not decline as Butler states. In their comments on our findings, Linda Butler, Jochen Gläser, Kaare Aagaard & Jesper Schneider, Ben Martin, and Diana Hicks put forward many arguments, but they do not dispute our basic finding: the citation impact of Australian research went up immediately after the output-based performance system was introduced. It is important to test Butler's findings about Australia, as they are part of the accepted knowledge in the field, heavily cited and often used in policy reports, but hardly confirmed in other studies. We found that Butler's conclusions are wrong, and that many of the policy implications based on them are simply unfounded. In our study we used better indicators and a causality concept similar to that of our opponents, and our findings are independent of the exact timing of the policy intervention. Furthermore, our commenters have not addressed our main conclusions at all, and some even claim that observations do not really matter in the social sciences. We find this position problematic: why would the taxpayer fund science policy studies if they were merely about opinions? Let's take science seriously, including our own field.

  • 10. Van Den Besselaar, Peter
    et al.
    Heyman, Ulf
    Sandström, Ulf
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.), Sustainability and Industrial Dynamics.
    Perverse effects of output-based research funding? Butler's Australian case revisited (2017). In: Journal of Informetrics, ISSN 1751-1577, E-ISSN 1875-5879, Vol. 11, no. 3, p. 905-918. Article in journal (Refereed)
    Abstract [en]

    More than ten years ago, Linda Butler (2003a) published a well-cited article claiming that Australian science policy in the early 1990s made a mistake by introducing output-based funding. According to Butler, the policy stimulated researchers to publish more, but poorer, papers, resulting in a lower total impact of Australian research compared to other countries. We redo and extend the analysis using longer time series, and show that Butler's main conclusions are not correct. We conclude in this paper (i) that the currently available data reject Butler's claim that "journal publication productivity has increased significantly… but its impact has declined", and (ii) that such evidence is also hard to find in a reconstruction of her data. On the contrary, after implementing evaluation systems and performance-based funding, Australia not only improved its share of research output but also increased research quality, implying that total impact was greatly increased. Our findings show that if output-based research funding has an effect on research quality, it is positive and not negative. This finding has implications for the discussions about research evaluation and about assumed perverse effects of incentives, as the Australian case plays a major role in those debates.

  • 11.
    Wang, Qi
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.), Sustainability and Industrial Dynamics.
    Waltman, Ludo
    Large-scale comparison between the journal classification systems of Web of Science and Scopus (2015). In: Journal of Informetrics, ISSN 1751-1577, E-ISSN 1875-5879, Vol. 10, p. 347-364. Article in journal (Refereed)
  • 12.
    Wang, Qi
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.).
    Waltman, L.
    Large-scale analysis of the accuracy of the journal classification systems of Web of Science and Scopus (2016). In: Journal of Informetrics, ISSN 1751-1577, E-ISSN 1875-5879, Vol. 10, no. 2, p. 347-364. Article in journal (Refereed)
    Abstract [en]

    Journal classification systems play an important role in bibliometric analyses. The two most important bibliographic databases, Web of Science and Scopus, each provide a journal classification system. However, no study has systematically investigated the accuracy of these classification systems. To examine and compare the accuracy of journal classification systems, we define two criteria on the basis of direct citation relations between journals and categories. We use Criterion I to select journals that have weak connections with their assigned categories, and we use Criterion II to identify journals that are not assigned to categories with which they have strong connections. If a journal satisfies either of the two criteria, we conclude that its assignment to categories may be questionable. Accordingly, we identify all journals with questionable classifications in Web of Science and Scopus. Furthermore, we perform a more in-depth analysis for the field of Library and Information Science to assess whether our proposed criteria are appropriate and whether they yield meaningful results. It turns out that according to our citation-based criteria Web of Science performs significantly better than Scopus in terms of the accuracy of its journal classification system.
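
    Both criteria in the abstract reduce to how a journal's direct citations distribute over categories. In the sketch below, Criterion I flags assigned categories that receive only a small share of the journal's citations, and Criterion II flags unassigned categories that receive a large share; the thresholds and the toy data are illustrative assumptions rather than the paper's calibrated definitions.

```python
# Sketch of the two citation-based criteria (thresholds and data are illustrative).
def questionable_assignments(citations_to_category, assigned, weak=0.05, strong=0.30):
    """Return (Criterion I hits, Criterion II hits) for one journal.

    citations_to_category: {category: citations from the journal to that category}
    assigned: set of categories the journal is assigned to in the database
    """
    total = sum(citations_to_category.values())
    shares = {cat: n / total for cat, n in citations_to_category.items()}
    crit1 = {cat for cat in assigned if shares.get(cat, 0.0) < weak}  # weakly connected assignment
    crit2 = {cat for cat, s in shares.items() if s >= strong and cat not in assigned}  # strong but unassigned
    return crit1, crit2

# Hypothetical journal citing three categories, assigned to two of them.
cites = {"Information Science": 40, "Computer Science": 120, "Management": 5}
print(questionable_assignments(cites, assigned={"Information Science", "Management"}))
```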

  • 13. Yang, Guoliang
    et al.
    Ahlgren, Per
    KTH, School of Education and Communication in Engineering Science (ECE), Department for Library services, Language and ARC, Library, Publication Infrastructure.
    Yang, Liying
    Rousseau, Ronald
    Ding, Jielan
    Reply to 'Comment on "Using multi-level frontiers in DEA models to grade countries/territories" by G.-L. Yang et al. [Journal of Informetrics 10(1) (2016), 238-253]' (2017). In: Journal of Informetrics, ISSN 1751-1577, E-ISSN 1875-5879, Vol. 11, no. 3, p. 647-648. Article in journal (Refereed)
  • 14. Yang, Guoliang
    et al.
    Ahlgren, Per
    KTH, School of Education and Communication in Engineering Science (ECE), Department for Library services, Language and ARC, Publication Infrastructure.
    Yang, Liying
    Rousseau, Ronald
    Ding, Jielan
    Using multi-level frontiers in DEA models to grade countries/territories (2016). In: Journal of Informetrics, ISSN 1751-1577, E-ISSN 1875-5879, Vol. 10, no. 1, p. 238-253. Article in journal (Refereed)
    Abstract [en]

    Several investigations into, and approaches for, categorizing academic journals/institutions/countries into different grades have been published in the past. To the best of our knowledge, most existing grading methods use either a weighted sum of quantitative indicators (including the case of one properly defined quantitative indicator) or quantified peer review results. Performance measurement is an important concern for science and technology (S&T) management. In this paper we address this issue, using multi-level frontiers resulting from data envelopment analysis (DEA) models to grade selected countries/territories. We use research funding and researchers as input indicators, and take papers, citations and patents as output indicators. Our results show that using DEA frontiers we can group countries/territories into different grades. These grades reflect the corresponding countries' levels of performance with respect to multiple inputs and outputs. Furthermore, we use papers, citations and patents as single outputs (with research funding and researchers as inputs), respectively, to show how country/territory grades change. To increase insight into this approach, we also incorporate a simple value judgment (that the number of citations is more important than the number of papers) as prior information into the DEA models and study the resulting changes in these countries'/territories' performance grades.
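
    The grading idea in the abstract can be illustrated by repeatedly computing a DEA frontier and peeling off the efficient units: grade 1 contains the units that are efficient on the full set, grade 2 those that become efficient once grade 1 is removed, and so on. The sketch below uses a standard input-oriented CCR DEA model solved as a linear program; the indicator values are made up and the model specification is an assumption, not a reproduction of the paper's exact DEA models.

```python
# Sketch: grade units by peeling DEA frontiers (input-oriented CCR model via linear programming).
# Indicator values are made up; the exact DEA specification in the paper may differ.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, o):
    """Input-oriented CCR efficiency of unit o, given inputs (m x n) and outputs (s x n)."""
    m, n = inputs.shape
    s = outputs.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                    # minimize theta; variables are [theta, lam_1..lam_n]
    A, b = [], []
    for i in range(m):                            # sum_j lam_j * x_ij <= theta * x_io
        A.append(np.concatenate(([-inputs[i, o]], inputs[i, :])))
        b.append(0.0)
    for r in range(s):                            # sum_j lam_j * y_rj >= y_ro
        A.append(np.concatenate(([0.0], -outputs[r, :])))
        b.append(-outputs[r, o])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

def grade_units(inputs, outputs, tol=1e-6):
    """Assign grade 1 to efficient units, remove them, and repeat on the remainder."""
    remaining = list(range(inputs.shape[1]))
    grades, grade = {}, 1
    while remaining:
        sub_in, sub_out = inputs[:, remaining], outputs[:, remaining]
        effs = [ccr_efficiency(sub_in, sub_out, k) for k in range(len(remaining))]
        frontier = [remaining[k] for k, e in enumerate(effs) if e >= 1.0 - tol]
        for j in frontier:
            grades[j] = grade
        remaining = [j for j in remaining if j not in frontier]
        grade += 1
    return grades

# Hypothetical countries: inputs = (funding, researchers), outputs = (papers, citations).
X = np.array([[100.0, 80.0, 120.0, 90.0], [50.0, 40.0, 70.0, 45.0]])
Y = np.array([[900.0, 700.0, 800.0, 850.0], [4000.0, 2500.0, 2600.0, 3900.0]])
print(grade_units(X, Y))  # dict mapping country index -> grade (1 = first frontier)
```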
