Publications (10 of 52)
Sandström, U. (2018). Funding, evaluation, and the performance of national research systems. Journal of Informetrics, 12(1), 365-384
Funding, evaluation, and the performance of national research systems
2018 (English). In: Journal of Informetrics, ISSN 1751-1577, E-ISSN 1875-5879, Vol. 12, no. 1, p. 365-384. Article in journal (Refereed). Published.
Abstract [en]

Understanding the quality of science systems requires international comparative studies, which are difficult because of the lack of comparable data, especially about inputs in research. In this study, we deploy an approach based on change instead of on levels of inputs and outputs: an approach that to a large extent eliminates the problem of measurement differences between countries. We first show that there are large differences in efficiency between national science systems, defined as the increase in output (highly cited papers) per percentage increase in input (funding). We then discuss our findings using popular explanations of performance differences: differences in funding systems (performance related or not), in the level of competition, in the level of university autonomy, and in the level of academic freedom. Interestingly, the available data do not support these common explanations. What the data suggest is that efficient systems are characterized by a well-developed ex post evaluation system combined with considerably high institutional funding and relatively low university autonomy (meaning high autonomy for the professionals). The less efficient systems, on the other hand, have strong ex ante control, either through a high level of so-called competitive project funding or through strong power of the university management. Another conclusion is that more and better data are needed.
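
The efficiency measure described above reduces to a ratio of growth rates. A minimal sketch of that computation in Python, with hypothetical numbers (the paper's actual data and normalizations are richer):

def efficiency(funding_t0, funding_t1, papers_t0, papers_t1):
    # Percentage growth in output (highly cited papers) divided by
    # percentage growth in input (funding).
    input_growth = (funding_t1 - funding_t0) / funding_t0 * 100
    output_growth = (papers_t1 - papers_t0) / papers_t0 * 100
    return output_growth / input_growth

# Hypothetical example: funding grows by 20%, highly cited output by 30%.
print(efficiency(100.0, 120.0, 50, 65))  # -> 1.5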

Keywords
Research policy; Input-output studies; Performance-based funding; Research efficiency; Bibliometrics; Citations
National Category
Social Sciences
Research subject
Industrial Economics and Management
Identifiers
urn:nbn:se:kth:diva-223588 (URN); 10.1016/j.joi.2018.01.007 (DOI); 000427479800026 (ISI); 2-s2.0-85042382770 (Scopus ID)
Funder
Riksbankens Jubileumsfond, P12-1302:1
Note

QC 20180308

Available from: 2018-02-23. Created: 2018-02-23. Last updated: 2018-05-04. Bibliographically approved.
Van Den Besselaar, P. & Sandström, U. (2018). Quantity matters, but how does it work?. Journal of Informetrics, 12(4), 1059-1062
Quantity matters, but how does it work?
2018 (English). In: Journal of Informetrics, ISSN 1751-1577, E-ISSN 1875-5879, Vol. 12, no. 4, p. 1059-1062. Article in journal (Refereed). Published.
Place, publisher, year, edition, pages
Elsevier Ltd, 2018
National Category
Engineering and Technology
Identifiers
urn:nbn:se:kth:diva-236619 (URN); 10.1016/j.joi.2018.08.007 (DOI); 000451074800004 (ISI); 2-s2.0-85052868809 (Scopus ID)
Note

QC 20181119

Available from: 2018-11-19. Created: 2018-11-19. Last updated: 2018-12-10. Bibliographically approved.
van den Besselaar, P., Sandström, U. & Schiffbaenker, H. (2018). Studying grant decision-making: a linguistic analysis of review reports. Scientometrics, 118(1), 313-329
Studying grant decision-making: a linguistic analysis of review reports
2018 (English). In: Scientometrics, ISSN 0138-9130, E-ISSN 1588-2861, Vol. 118, no. 1, p. 313-329. Article in journal (Refereed). Published.
Abstract [en]

Peer and panel review are the dominant forms of grant decision-making, despite their serious weaknesses, as shown by many studies. This paper contributes to the understanding of the grant selection process through a linguistic analysis of the review reports. In that way we reconstruct several aspects of the evaluation and selection process: which dimensions of a proposal are discussed during the process and how, and what distinguishes successful from unsuccessful applications? We combine the linguistic findings with interviews with panel members and with bibliometric performance scores of applicants. The interviews provide context, and the performance scores help to interpret the linguistic findings. The analysis shows that the performance of the applicant and the content of the proposed study are assessed with the same categories, suggesting that the panelists do not actually distinguish between past performance and promising new research ideas. The analysis also suggests that the panels focus on rejecting applications by searching for weak points, rather than on finding the high-risk/high-gain groundbreaking ideas that may be in the proposals. This can easily result in suboptimal selections, low predictive validity, and bias.
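
As a rough illustration of the kind of comparison described above: review reports scored on word categories (as LIWC produces) can be compared between granted and rejected applications. A sketch in Python; the categories and numbers are hypothetical, not the study's data.

from statistics import mean

# Hypothetical LIWC-style category scores per review report.
reports = [
    {"granted": True,  "scores": {"achievement": 3.1, "tentative": 1.2}},
    {"granted": True,  "scores": {"achievement": 2.8, "tentative": 1.5}},
    {"granted": False, "scores": {"achievement": 2.0, "tentative": 2.4}},
    {"granted": False, "scores": {"achievement": 1.7, "tentative": 2.9}},
]

def mean_score(category, granted):
    return mean(r["scores"][category] for r in reports if r["granted"] == granted)

for cat in ["achievement", "tentative"]:
    diff = mean_score(cat, True) - mean_score(cat, False)
    print(f"{cat}: granted minus rejected = {diff:+.2f}")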

Place, publisher, year, edition, pages
Springer, 2018
Keywords
Peer review; Panel review; Research grants; Decision-making; Linguistics; LIWC; European Research Council (ERC)
National Category
Social Sciences
Identifiers
urn:nbn:se:kth:diva-235172 (URN); 10.1007/s11192-018-2848-x (DOI); 2-s2.0-85049856026 (Scopus ID)
Funder
EU, European Research Council
Note

QC 20180918

Available from: 2018-09-17. Created: 2018-09-17. Last updated: 2018-10-16. Bibliographically approved.
van den Besselaar, P. & Sandström, U. (2017). Counterintuitive effects of incentives?. Research Evaluation, 26(4), 349-351
Counterintuitive effects of incentives?
2017 (English). In: Research Evaluation, ISSN 0958-2029, E-ISSN 1471-5449, Vol. 26, no. 4, p. 349-351. Article in journal, Editorial material (Other academic). Published.
Abstract [en]

A recent paper in this journal compares the Norwegian model of using publication counts for university funding with a similar intervention in Australia in the mid-1990s. The authors argue that the Norwegian model (which takes the quality of publications into account) performs better than the Australian model (which disregarded paper quality beyond the requirement of peer review). We argue that these conclusions contradict the evidence provided in the article, and should therefore be considered incorrect.

Place, publisher, year, edition, pages
Oxford University Press, 2017
National Category
Information Studies
Identifiers
urn:nbn:se:kth:diva-217443 (URN); 10.1093/reseval/rvx029 (DOI); 000413999100008 (ISI); 2-s2.0-85032747126 (Scopus ID)
Note

QC 20171117

Available from: 2017-11-17. Created: 2017-11-17. Last updated: 2017-11-17. Bibliographically approved.
Sandström, U. (2017). Evaluating research portfolios, a method and a case. In: STI 2017 - Science, Technology and Innovation indicators: Open indicators: innovation, participation and actor-based STI indicators, Paris 2017. Paper presented at STI2017, 6 – 8 September 2017.
Evaluating research portfolios, a method and a case
2017 (English). In: STI 2017 - Science, Technology and Innovation indicators: Open indicators: innovation, participation and actor-based STI indicators, Paris 2017. Conference paper, Published paper (Refereed).
Abstract [en]

Evaluating whether a portfolio of funded research projects (of a research council) or a portfolio of research papers (the output of a university) is relevant for science and for society requires a two-dimensional mapping of the portfolio: (i) projecting the portfolio on a science map, showing how the portfolio fits into and possibly shapes the research fronts, and (ii) projecting the portfolio on a map of societal challenges, showing where the portfolio links to societal problem solving or innovation. This requires evaluating in two different 'languages': a technical language relating projects to the research front, and a societal language relating the projects to societal challenges. In this paper, we demonstrate a method for doing so, using the SMS platform. The advantages are that the method is much less dependent on subjective classifications by single experts or a small group of experts, and that it is rather user-friendly.
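
A toy sketch of the two-dimensional projection idea, assuming each paper or project has already been assigned to a research-front cluster and matched to societal challenges; the SMS platform's actual pipeline is far more elaborate, and all names below are illustrative.

from collections import Counter

# Each item: a paper/project with a research-front cluster and societal challenges.
portfolio = [
    {"cluster": "energy storage", "challenges": ["climate"]},
    {"cluster": "energy storage", "challenges": ["climate", "transport"]},
    {"cluster": "epidemiology",   "challenges": ["health"]},
]

front_map = Counter(p["cluster"] for p in portfolio)                    # science-map view
challenge_map = Counter(c for p in portfolio for c in p["challenges"])  # societal view
print("research fronts:", dict(front_map))
print("societal challenges:", dict(challenge_map))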

National Category
Social Sciences
Research subject
Industrial Economics and Management
Identifiers
urn:nbn:se:kth:diva-214068 (URN)
Conference
STI2017, 6 – 8 September 2017
Note

QC 20171018

Available from: 2017-09-11. Created: 2017-09-11. Last updated: 2018-06-19. Bibliographically approved.
Sandström, U. (2017). Influence of cognitive distance on grant decisions. In: STI 2017 - Science, Technology and Innovation indicators: Open indicators: innovation, participation and actor-based STI indicators. Paper presented at STI2017, 6 – 8 September 2017.
Influence of cognitive distance on grant decisions
2017 (English). In: STI 2017 - Science, Technology and Innovation indicators: Open indicators: innovation, participation and actor-based STI indicators, 2017. Conference paper, Published paper (Refereed).
Abstract [en]

The selection of grant applications is generally based on peer and panel review, but as many studies have shown, the outcome of this process depends not only on scientific merit or excellence, but also on social factors and on the way the decision-making process is organized. A major criticism of the peer review process is that it is inherently conservative, with panel members inclined to select applications that are in line with their own theoretical perspective. In this paper we define 'cognitive distance' and operationalize it. We apply the concept and investigate whether it influences the probability of getting funded.
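
One plausible way to operationalize cognitive distance (a sketch only; the paper's own operationalization may differ) is as the cosine distance between term vectors representing an applicant's work and the panel's work:

import math
from collections import Counter

def cognitive_distance(text_a, text_b):
    # 1 minus cosine similarity of simple term-frequency vectors.
    a, b = Counter(text_a.split()), Counter(text_b.split())
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - dot / norm if norm else 1.0

# Hypothetical title-word profiles of an applicant and a panel.
applicant = "network models of innovation diffusion in regional systems"
panel = "macroeconomic models of innovation and growth in national systems"
print(round(cognitive_distance(applicant, panel), 2))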

National Category
Social Sciences
Research subject
Industrial Economics and Management
Identifiers
urn:nbn:se:kth:diva-214067 (URN)
Conference
STI2017, 6 – 8 September 2017
Note

QC 20171018

Available from: 2017-09-11. Created: 2017-09-11. Last updated: 2017-10-18. Bibliographically approved.
Sandström, U. (2016). Arbetslivsforskningens framtid i bibliometriskt perspektiv. In: Åke Sandberg (Ed.), På jakt efter framtidens arbete: Utmaningar i arbetets organisering och forskning (pp. 188-197). Tankesmedjan Tiden
Arbetslivsforskningens framtid i bibliometriskt perspektiv
2016 (Swedish). In: På jakt efter framtidens arbete: Utmaningar i arbetets organisering och forskning / [ed] Åke Sandberg, Tankesmedjan Tiden, 2016, p. 188-197. Chapter in book (Other academic).
Abstract [sv]

When Swedish research on work organization was recently evaluated, the result was a mixture of confidence and concern for the field. Things are certainly going well, but doubts about renewal raised serious questions for the future. Sweden is exceptionally well positioned in those subfields of working life research that show stable growth and yield good results in the form of collegial attention. Much suggests that Swedish research has anchored itself in a number of strong flagship areas, and that this has meant a lock-in to areas that may well lose significance in the longer term. New subfields within working life research are not covered at all by Swedish researchers, or at least not to the extent expected. The Swedish research portfolio is relatively concentrated and therefore risks becoming a millstone if and when substantial advances are made in new areas.

Place, publisher, year, edition, pages
Tankesmedjan Tiden, 2016
Keywords
bibliometrics, working life
National Category
Other Social Sciences
Research subject
Industrial Engineering and Management
Identifiers
urn:nbn:se:kth:diva-197087 (URN); 9789156631672 (ISBN)
Note

QC 20161209

Available from: 2016-11-29. Created: 2016-11-29. Last updated: 2016-12-09. Bibliographically approved.
van den Besselaar, P. & Sandström, U. (2016). Gender differences in research performance and its impact on careers: a longitudinal case study. Scientometrics, 106(1), 143-162
Gender differences in research performance and its impact on careers: a longitudinal case study
2016 (English). In: Scientometrics, ISSN 0138-9130, E-ISSN 1588-2861, Vol. 106, no. 1, p. 143-162. Article in journal (Refereed). Published.
Abstract [en]

We take up the issue of performance differences between male and female researchers, and investigate how these differences change during the early career. A previous paper showed that among starting researchers, gendered performance differences seem small to non-existent (Van Arensbergen et al. 2012). If the differences no longer occur in the early career, they may emerge in a later period, or they may remain absent. In this paper we use the same sample of male and female researchers, but now compare performance levels about ten years later. We use various performance indicators: full and fractional counted productivity, citation impact, and relative citation impact in terms of the share of papers among the top 10% highly cited papers. After the ten-year period, the productivity of male researchers has grown faster than that of female researchers, but the field-normalized (relative) citation impact indicators of male and female researchers remain about equal. Furthermore, performance data do explain to a certain extent why male researchers' careers in our sample develop much faster than female researchers' careers; but controlling for performance differences, we find that gender is an important determinant too. Consequently, the process of hiring academic staff still remains biased.
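
The indicators named above are straightforward to compute once each paper carries an author count and a (field-normalized) top-10% flag. A minimal sketch with hypothetical data; in practice the flags come from a bibliometric database:

# Hypothetical paper list for one researcher.
papers = [
    {"n_authors": 3, "top10": True},
    {"n_authors": 2, "top10": False},
    {"n_authors": 5, "top10": True},
]

full_count = len(papers)                              # full counting
frac_count = sum(1 / p["n_authors"] for p in papers)  # fractional counting
top10_share = sum(p["top10"] for p in papers) / len(papers)

print(f"full: {full_count}, fractional: {frac_count:.2f}, "
      f"top-10% share: {top10_share:.0%}")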

Place, publisher, year, edition, pages
Springer, 2016
Keywords
Gender bias, Academic careers, Performance differences, Longitudinal study
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-182877 (URN); 10.1007/s11192-015-1775-3 (DOI); 000368075800009 (ISI); 2-s2.0-84954397574 (Scopus ID)
Note

QC 20160224

Available from: 2016-02-24. Created: 2016-02-23. Last updated: 2018-09-11. Bibliographically approved.
Koski, T., Sandström, E. & Sandström, U. (2016). Towards field-adjusted production: Estimating research productivity from a zero-truncated distribution. Journal of Informetrics, 10(4), 1143-1152
Towards field-adjusted production: Estimating research productivity from a zero-truncated distribution
2016 (English). In: Journal of Informetrics, ISSN 1751-1577, E-ISSN 1875-5879, Vol. 10, no. 4, p. 1143-1152. Article in journal (Refereed). Published.
Abstract [en]

Measures of research productivity (e.g., peer-reviewed papers per researcher) are a fundamental part of bibliometric studies, but are often restricted by the properties of the available data. This paper addresses that fundamental issue and presents a detailed method for estimating productivity based on data available in bibliographic databases (e.g., Web of Science and Scopus). The method can, for example, be used to estimate average productivity in different fields, and such field reference values can be used to produce field-adjusted production values. Being able to produce such field-adjusted production values could dramatically increase the relevance of bibliometric rankings and other bibliometric performance indicators. The results indicate that the estimations are reasonably stable given a sufficiently large data set.
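
The paper fits a Waring distribution; as a simpler illustrative stand-in, the sketch below fits a zero-truncated Poisson by matching the observed mean. It captures the core idea: researchers with zero indexed papers are invisible in the database, so the untruncated mean productivity must be inferred.

import numpy as np
from scipy.optimize import brentq

def fit_zero_truncated_poisson(counts):
    # MLE for lambda: the zero-truncated Poisson mean, lambda / (1 - exp(-lambda)),
    # must equal the observed mean of the visible (k >= 1) counts.
    m = np.mean(counts)
    return brentq(lambda lam: lam / (1 - np.exp(-lam)) - m, 1e-9, 100.0)

observed = [1, 1, 2, 1, 3, 1, 2, 5, 1, 2]  # papers per visible researcher (hypothetical)
lam = fit_zero_truncated_poisson(observed)
print(f"estimated mean productivity incl. invisible zeros: {lam:.2f}")
print(f"estimated share of researchers with zero papers: {np.exp(-lam):.0%}")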

Place, publisher, year, edition, pages
Elsevier, 2016
Keywords
Research productivity, Waring distribution, Field adjusted production, Size-dependent indicators
National Category
Information Studies; Other Social Sciences
Research subject
Industrial Engineering and Management
Identifiers
urn:nbn:se:kth:diva-197085 (URN); 10.1016/j.joi.2016.09.002 (DOI); 000389548900019 (ISI); 2-s2.0-84992025500 (Scopus ID)
Funder
Riksbankens Jubileumsfond, P12-1302:1
Note

QC 20170109

Available from: 2016-11-29. Created: 2016-11-29. Last updated: 2017-11-29. Bibliographically approved.
Sandström, U. (2016). What is the Required Level of Data Cleaning? A Research Evaluation Case. Journal of Scientometric Research, 5(1), 7-12
What is the Required Level of Data Cleaning? A Research Evaluation Case
2016 (English). In: Journal of Scientometric Research, ISSN 2321-6654, Vol. 5, no. 1, p. 7-12. Article in journal (Refereed). Published.
Abstract [en]

Bibliometric methods depend heavily on the quality of data, and cleaning and disambiguating data are very time-consuming. Considerable effort is therefore devoted to developing better and faster tools for disambiguating the data (e.g., Gurney et al. 2012). Parallel to this, one may ask to what extent data cleaning is needed, given the intended use of the data. To what extent is there a trade-off between the type of questions asked and the level of cleaning and disambiguation required? When evaluating individuals, a very high level of data cleaning is required, but for other types of research questions one may accept certain levels of error, as long as these errors do not correlate with the variables under study. In this paper, we revisit an earlier case study that used a rather crude way of data handling, where it was expected that the unavoidable errors would even out. We now perform sophisticated data cleaning and disambiguation on the same dataset and repeat the earlier analysis. We compare the results and draw conclusions about the required level of data cleaning.
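
The crude-versus-careful trade-off discussed above can be made concrete: a crude normalized name key merges most true matches (and some false ones), while real disambiguation demands additional evidence such as a shared affiliation. A small hypothetical sketch:

import unicodedata

def name_key(name):
    # Crude key: ASCII-folded lowercase surname plus first initial.
    folded = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    surname, given = [p.strip() for p in folded.lower().split(",")]
    return f"{surname} {given[0]}"

a = {"name": "Sandström, U.", "affil": "KTH"}
b = {"name": "Sandstrom, Ulf", "affil": "KTH"}

crude_match = name_key(a["name"]) == name_key(b["name"])  # name key only
careful_match = crude_match and a["affil"] == b["affil"]  # requires extra evidence
print(crude_match, careful_match)  # True True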

Place, publisher, year, edition, pages
Wolters Kluwer Health and Medknow Publications, 2016
Keywords
Coupling data sets, Data cleaning disambiguation, Data error
National Category
Other Social Sciences not elsewhere specified
Research subject
Industrial Economics and Management
Identifiers
urn:nbn:se:kth:diva-191463 (URN)
Funder
Riksbankens Jubileumsfond
Note

QC 20160907

Available from: 2016-08-30. Created: 2016-08-30. Last updated: 2017-06-08. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-1292-8239
