Measures of research productivity (e.g. peer-reviewed papers per researcher) are a fundamental part of bibliometric studies, but they are often restricted by the properties of the data available. This paper addresses that fundamental issue and presents a detailed method for estimating productivity (peer-reviewed papers per researcher) based on data available in bibliographic databases (e.g. Web of Science and Scopus). The method can, for example, be used to estimate average productivity in different fields, and such field reference values can in turn be used to produce field-adjusted production values. Being able to produce such field-adjusted production values could dramatically increase the relevance of bibliometric rankings and other bibliometric performance indicators. The results indicate that the estimations are reasonably stable given a sufficiently large data set.
Segregated impurities at grain boundaries can dramatically change the mechanical behavior of metals, yet the mechanism remains obscure in some cases. Here, we suggest a unified approach to investigating segregation and its effects on the mechanical properties of polycrystalline alloys, using the example of 3sp impurities (Mg, Al, Si, P, or S) at a special-type Sigma 5(310)[001] tilt grain boundary in Cu. We show that for these impurities segregating to the grain boundary, the strain contribution to the work of grain boundary decohesion is small and that the chemical contribution correlates with the electronegativity difference between Cu and the impurity. The strain contribution to the work of dislocation emission is calculated to be negative, while the chemical contribution is calculated to be always positive. Both the strain and chemical contributions to the work of dislocation emission generally become weaker with increasing electronegativity from Mg to S. Combining these contributions, we find, in agreement with experimental observations, that strong segregation of S can reduce the work of grain boundary separation below the work of dislocation emission, thus embrittling Cu, while such embrittlement cannot be produced by P segregation, because P lowers the energy barrier for dislocation emission relatively more than the work of separation.
In this paper we apply large-scale review methodologies and introduce the PPA (Publication to Patent Analysis) model in order to map recent developments at the science and technology frontiers of biorefinery activities. Using a bibliometric approach in the PPA model, we aim to detect and analyze scientific and technological trends in a potentially huge sector. This study takes on the challenge of closing the gap between two strands of analysis: publication analysis and a parallel patent analysis. Bibliometrics is used to provide the basis for identifying science and technology frontiers, i.e. clusters. Moreover, the data analysis can be used for probabilistic topic modelling of clusters created from publication and patent data. In studies of knowledge transfer between science and technology, this kind of approach has previously been applied only at the micro level. We provide a novel approach for analyses of broader technology sectors, which may support policymakers, R&D funding agencies and industrial management in the production of scientific and technological intelligence. In our case study, the results indicate that the old forest industry nations in Europe (Austria, Germany, and Sweden) seem at best reactive and pragmatic, leaning on their industrial specialization. In general, the EU countries are lagging behind when it comes to new and fast-growing science fields based on non-woody biomass materials and product fields, and are instead concentrated in more mature or slow-growing wood-based activities.
Any type of scientific study or evaluation of research quality and impact runs into two problems if more than one topic area is involved: (1) How to account for differences in (paper) production? (2) How to account for differences in citation impact, i.e. influence over subsequent literature? This paper aims to show that these questions can be answered with the help of two methods: the Field Adjusted Production (FAP) indicator and a percentile indicator designed to include the FAP. They are used in combination to express a score that includes both paper production and impact in one figure. This yields a score that can be used for ranking universities, departments, and individuals. The paper first explains the background of the method, and then how to calculate the indicators belonging to the P-Model. Finally, the paper gives some examples and discusses methods for validating the proposed indicator.
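The combination of production and impact described above can be sketched in code. This is a minimal illustration only: the field reference values, the proportional weighting, and the function names are assumptions for the sketch, not the P-Model's actual definitions.

```python
# Illustrative sketch: combine field-adjusted production (FAP) with a
# citation percentile into one figure. Reference values and the
# weighting rule are assumed, not taken from the paper.

def field_adjusted_production(papers_per_field, field_reference):
    """Papers in each field divided by that field's average productivity, summed."""
    return sum(n / field_reference[f] for f, n in papers_per_field.items())

def combined_score(fap, citation_percentile):
    """Fold production and impact into a single score (hypothetical weighting)."""
    return fap * citation_percentile / 100.0

papers = {"physics": 6, "chemistry": 2}         # papers by field
reference = {"physics": 3.0, "chemistry": 2.0}  # assumed field averages
fap = field_adjusted_production(papers, reference)  # 6/3 + 2/2 = 3.0
score = combined_score(fap, 80)  # production and an 80th-percentile impact
```

A researcher who matches the field average in every field gets a FAP of exactly the number of fields weighted to 1.0 each, which is what makes scores comparable across topic areas.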
When Swedish research on work organization was recently evaluated, the result was a mixture of confidence and concern for the field. Certainly things are going well, but doubts about renewal raised serious questions for the future. Sweden stands exceptionally strong in subfields of working life research that show stable growth and yield good results in the form of collegial attention. Much suggests that Swedish research has anchored itself in a number of strong flagship areas, and that this has meant a lock-in to areas that may well lose importance in the longer term. New subfields within working life research are not covered at all by Swedish researchers, or at least not to the expected extent. The Swedish research portfolio is relatively concentrated and therefore risks becoming a millstone if and when substantial advances are made in new areas.
During 2015, all research performed from 2008 to 2014 at Örebro University, as well as research at Örebro University Hospital, was evaluated. This report – ORU2015 – presents the background, planning and implementation of the research assessment and its results. Chapter I includes the panel evaluations, and chapter II presents the bibliometric analysis.
The concept of cognitive similarity, developed by Travis and Collins (1991), is the starting point for this paper. We suggest that cognitive similarity is detectable through bibliometric analysis using bibliographic coupling (Kessler, 1963) or, as an alternative, noun phrases in title and abstract. Connected to this hypothesis is the possibility of cognitive bias in peer review: if academics tend to give higher scores to research with which they have a cognitive similarity, there is a situation of cognitive bias. The design of the research project is described and the available data sources are discussed. With data on applicants and reviewers, complemented with bibliometric identification of each individual's publications, this project can potentially make an essential contribution to our understanding of the peer review process.
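Bibliographic coupling can be made concrete with a small sketch: two papers are coupled when their cited-reference lists overlap. The Jaccard normalisation below is one common choice and an assumption here, not necessarily the measure used in the project.

```python
def coupling_strength(refs_a, refs_b):
    """Jaccard similarity of two papers' cited-reference sets.

    Overlapping references indicate cognitive similarity in the sense of
    bibliographic coupling (Kessler, 1963). Normalisation choice is assumed.
    """
    a, b = set(refs_a), set(refs_b)
    if not (a or b):
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical reference lists for illustration
paper1 = ["Kessler1963", "Travis1991", "Merton1973"]
paper2 = ["Kessler1963", "Travis1991", "Cole1981"]
strength = coupling_strength(paper1, paper2)  # 2 shared of 4 distinct -> 0.5
```

The same function applied to an applicant's and a reviewer's reference sets would give one crude, scalable operationalisation of their cognitive similarity.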
This paper demonstrates the benefits of combining curriculum vitae studies with advanced bibliometrics. Based on data from 326 CVs within one broad medical subject area, we perform a cluster analysis of CV data. Data reduction produces four different groups of scientists: 1) mobile, 2) immobile, 3) excellent and 4) entrepreneurial. While it is clear that the most mobile and the least mobile researchers are also opposites in citation performance, we should acknowledge that for the large majority, with low and medium mobility, there is no linear pattern of performance. The paper points to a double process: on the one hand, selection processes at universities picking out 'the winners'; on the other, self-selection processes where researchers enhance their own performance by being mobile.
Evaluating whether a portfolio of funded research projects (of a research council) or a portfolio of research papers (the output of a university) is relevant for science and for society requires a two-dimensional mapping of the portfolio: (i) projecting the portfolio on a science map, showing how the portfolio fits into and possibly shapes the research fronts, and (ii) projecting the portfolio on a map of societal challenges, showing where the portfolio links to societal problem solving or innovation. This requires evaluating in two different 'languages': a technical language relating projects to the research front, and a societal language relating the projects to societal challenges. In this paper, we demonstrate a method for doing so, using the SMS-platform. The advantages are that the method is much less dependent on subjective classifications by single experts or a small group of experts, and that it is rather user-friendly.
The selection of grant applications is generally based on peer and panel review, but as shown in many studies, the outcome of this process depends not only on scientific merit or excellence, but also on social factors and on the way the decision-making process is organized. A major criticism of the peer review process is that it is inherently conservative, with panel members inclined to select applications that are in line with their own theoretical perspective. In this paper we define 'cognitive distance' and operationalize it. We apply the concept and investigate whether it influences the probability of getting funded.
This book portrays the Swedish Council for Building Research (Byggforskningsrådet, BFR) as an organization and research-funding body in relation to both research-policy and broader political questions during the period 1960-1992. Six years after the book was published, BFR was dissolved: the state reorganized the Swedish research system and ended the traditional sectoral research policy. In the present edition, some sections judged to be of lesser interest have been removed, and minor linguistic adjustments have been made. Original title: Mellan politik och forskning: Byggforskningsrådet 1960-1992.
It is often argued that the presence of stakeholders in review panels may improve the selection of societally relevant research projects. In this paper, we investigate whether the composition of panels indeed matters. More precisely, when stakeholders are in the panel, does that result in a more positive evaluation of proposals of relevance to that stakeholder? We investigate this for the gender issues domain, and show that this is the case. When stakeholders are present, the relevant projects obtain a more positive evaluation and consequently a higher score. If these findings can be generalised, they are an important insight for the creation of pathways to and conditions for impact.
This research paper in progress discusses some of the common criticisms of peer review: costs and robustness, nepotism (conflicts of interest), sexism, and cognitive bias. Attention is given to the fact that much of the research reported fails on a crucial point: the use of bibliometrics as a correlate for the grading and ranking done by granting or evaluation committees (ad hoc or standing committees). The full paper will extend the analysis using data from a selection of finished projects and assessments. Results indicate that there are systemic problems regarding peer review. Firstly, there is a positive bias in university assessments based on ad hoc committees; the problems circle around the absence of robust benchmarks and the ad hoc selection of experts. Secondly, the role of cognitive distance points to the power mechanisms in the selection processes for finding relevant reviewers. Thirdly, the low bibliometric performance of peers indicates that the selection of peers is no longer a search for the best possible peer, but for the pragmatic peer.
We analyze the relation between funding and output using bibliometric methods with field-normalized data. Our approach is to connect individual researcher data from Swedish university databases to data on incoming grants using the specific personal ID number. Data on funding include the person responsible for the grant. All types of research income are considered in the analysis, yielding a project database with a high level of precision. Results show that productivity can be explained by background variables, but that the quality of research is more or less unrelated to background variables.
Swedish Political Science: a bibliometric analysis. Citations, productivity measures and rankings have become reality in modern university life. Many of the bibliometric reports presented by ranking institutes and non-professional bibliometricians are flawed due to methodologically unsound procedures. This article discusses three important methodological problems involved in bibliometric studies: 1) the number of personnel at university departments; 2) the counting of articles from these departments; and 3) the counting of citations to these articles. Relating to earlier research (Hix, 2004), it is shown that the counting of personnel, a very important reference value, can be conducted in several different ways. Following Dale & Goldfinch (2005), we discuss the limitation to political science journals proposed by Hix. There is a large influx of non-political scientists to the area and a similar outflow of political scientists to other journal categories (e.g. environmental studies). Therefore, the proposed limitation is questioned. Implementing advanced methods for field-normalized citation scores (van Raan, 2004), we conclude the article with an analysis of Swedish university departments in political science during the period 1998-2005. The result is a promising 33 per cent better citation score than the world average, but the downside is a low number of articles per researcher.
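A field-normalized citation score of the kind referred to (van Raan, 2004) can be sketched as observed citations divided by the expected, world-average citations for each paper's field; the numbers below are invented for illustration, and the exact normalization used in the article may differ. A score of 1.33 corresponds to the reported 33 per cent advantage over the world average.

```python
def field_normalised_score(papers):
    """papers: list of (citations, world_average_for_field_and_year) pairs.

    Ratio of total observed to total expected citations; 1.0 equals
    the world average. A simplified sketch of the van Raan tradition.
    """
    observed = sum(cites for cites, _ in papers)
    expected = sum(avg for _, avg in papers)
    return observed / expected

# Three hypothetical papers in a field averaging 4.0 citations: 16/12, about 1.33
score = field_normalised_score([(8, 4.0), (2, 4.0), (6, 4.0)])
```

Summing before dividing (rather than averaging per-paper ratios) is one of the methodological choices that separates professional from naive bibliometric reports.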
It is often argued that female researchers publish on average less than male researchers do, but that male- and female-authored papers have equal impact. In this paper we try to better understand this phenomenon by (i) comparing the share of male and female researchers within different productivity classes, and (ii) comparing productivity while controlling for a series of relevant covariates. The study is based on a disambiguated Swedish author dataset, consisting of 47,000 researchers and their WoS publications during the period 2008-2011, with citations until 2015. As the analysis shows, in order to have impact, quantity does make a difference for male and female researchers alike—but women are vastly underrepresented in the group of most productive researchers. We discuss and test several possible explanations of this finding, using data on personal characteristics from several Swedish universities. Gender differences in age, authorship position, and academic rank explain a considerable part of the productivity differences.
We analyze the relation between funding and output using bibliometric methods with field-normalized data. Our approach is to create a connection between bibliometric data at the individual researcher level and data on incoming grants (funding) using the specific personal ID number (social security code). Data on funding include the person responsible for the grant. All types of research income are considered in the analysis, yielding a project database with a high level of precision. Results show that productivity can be explained by background variables, but that the quality of research is unrelated to background variables. This is a paper in progress: our ambition is to extend it theoretically, analytically and empirically.
In a replication of the high-profile contribution by Wenneras and Wold on grant peer-review, we investigate new applications processed by the medical research council in Sweden. Introducing a normalisation method for ranking applications that takes into account the differences between committees, we also use a normalisation of bibliometric measures by field. Finally, we perform a regression analysis with interaction effects. Our results indicate that female principal investigators (PIs) receive a bonus of 10% on scores, in relation to their male colleagues. However, male and female PIs having a reviewer affiliation collect an even higher bonus, approximately 15%. Nepotism seems to be a persistent problem in the Swedish grant peer review system.
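Making ranking scores comparable across committees can be sketched as a within-committee standardization. The z-score below is a plausible choice and an assumption for illustration, not necessarily the normalization method introduced in the study.

```python
from statistics import mean, stdev

def normalise_by_committee(scores_by_committee):
    """Return z-scores computed within each committee separately,
    so that applications can be compared across committees that use
    their scales differently."""
    normalised = {}
    for committee, scores in scores_by_committee.items():
        m, s = mean(scores), stdev(scores)
        normalised[committee] = [(x - m) / s for x in scores]
    return normalised

# Hypothetical raw panel scores: one generous committee, one strict
raw = {"medicine": [3.0, 4.0, 5.0], "oncology": [1.0, 2.0, 3.0]}
z = normalise_by_committee(raw)  # each committee now centred on 0
```

After such a transformation, the top application in a strict committee is no longer penalized relative to a middling application in a generous one.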
The aim of this paper is to demonstrate a method for bibliometric evaluation of individuals, i.e. research staff currently employed within a university department or other knowledge organisations engaged in research. Based on methods for citation analysis and for clustering papers into research lines (using bibliographic coupling), we present an analysis of one researcher in three dimensions: 1) publication and citation indicators; 2) publication profile; and 3) research lines. One of the features of the method is the benchmark against other researchers within the same research line, i.e. researchers that use the same references and, accordingly, are active in the same field of research. The paper suggests this method as a means to avoid the fallacies of evaluation solely dependent on sub-field categories in the Web of Science in advanced citation analysis. The method was used in a Research Assessment Exercise carried out in the autumn of 2008 at the Royal Institute of Technology.
We present a new model for performance-related funding of universities in Sweden. The model is based on the number of papers in international scientific journals, but relies on an estimation of field-adjusted production per scientific/technological area. Author counts are based on potential authors using the Waring distribution for 34 areas of science (Schubert and Braun, 1992). We apply this model to the Swedish university system and illustrate with the reallocations that would follow from a complete implementation. Next, we test the accuracy of the method using publication data from six Swedish universities and four Norwegian universities. In conclusion, we discuss advantages and drawbacks of the method.
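The reallocation step can be illustrated with a toy proportional rule: each university's share of the funding pool follows its field-adjusted production. Both the rule and the numbers are assumptions for illustration; the actual model's allocation formula is more elaborate.

```python
def reallocate(pool, fap_by_university):
    """Split a funding pool in proportion to field-adjusted production.

    A hypothetical proportional rule, not the model's exact formula.
    """
    total = sum(fap_by_university.values())
    return {uni: pool * fap / total for uni, fap in fap_by_university.items()}

# Two hypothetical universities sharing a pool of 100 units
shares = reallocate(100.0, {"University A": 3.0, "University B": 2.0})
# University A receives 60.0 units, University B receives 40.0
```

Because the inputs are field-adjusted, a technical university is not penalized for being active in fields where papers are an inherently rarer output.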
Understanding the quality of science systems requires international comparative studies, which are difficult because of the lack of comparable data, especially about inputs into research. In this study, we deploy an approach based on change instead of on levels of inputs and outputs: an approach that to a large extent eliminates the problem of measurement differences between countries. We first show that there are large differences in efficiency between national science systems, defined as the increase in output (highly cited papers) per percentage increase in input (funding). We then discuss our findings using popular explanations of performance differences: differences in funding systems (performance-related or not), in the level of competition, in the level of university autonomy, and in the level of academic freedom. Interestingly, the available data do not support these common explanations. What the data suggest is that efficient systems are characterized by a well-developed ex post evaluation system combined with fairly high institutional funding and relatively low university autonomy (meaning a high autonomy of professionals). The less efficient systems, on the other hand, have strong ex ante control, either through a high level of so-called competitive project funding or through strong power of the university management. Another conclusion is that more and better data are needed.
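The change-based efficiency measure can be written as a simple ratio of growth rates, which is what makes level-measurement differences between countries cancel out. The operationalization below is a sketch under that reading; the study's exact definition may differ.

```python
def efficiency(output_start, output_end, funding_start, funding_end):
    """Percentage growth in highly cited papers per percentage growth
    in funding. Levels enter only as ratios, so any constant country-
    specific measurement bias in either series cancels out."""
    output_growth = 100.0 * (output_end - output_start) / output_start
    funding_growth = 100.0 * (funding_end - funding_start) / funding_start
    return output_growth / funding_growth

# Hypothetical country: 20% more highly cited papers for 10% more funding
e = efficiency(1000, 1200, 50.0, 55.0)  # efficiency of 2
```

If a country systematically over-reports funding by a constant factor, both `funding_start` and `funding_end` are inflated equally and the ratio is unchanged, which is the point of working with change rather than levels.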
This paper investigates what factors affect the performance of research teams. We combine survey data about the teams with bibliometric data about their performance. The analysis shows that teams with a few PIs perform better than single-PI teams, controlling, of course, for team size. On the other hand, gender diversity does not have an effect on performance. The good news is that gender objectives can be realized without any performance penalty.
Do highly productive researchers have a significantly higher probability of producing top-cited papers? Or do highly productive researchers mainly produce a sea of irrelevant papers; in other words, do we find diminishing marginal returns from productivity? The answer to these questions is important, as it may help to determine whether the increased competition and the increased use of indicators for research evaluation and accountability have perverse effects or not. We use a disambiguated Swedish author dataset consisting of 48,000 researchers and their WoS publications during the period 2008–2011, with citations until 2014, to investigate the relation between productivity and the production of highly cited papers. As the analysis shows, quantity does make a difference.
Bibliometric methods depend heavily on the quality of data, and cleaning and disambiguating data are very time-consuming. Therefore, considerable effort is devoted to the development of better and faster tools for disambiguating the data (e.g., Gurney et al. 2012). In parallel, one may ask to what extent data cleaning is needed, given the intended use of the data. To what extent is there a trade-off between the type of questions asked and the level of cleaning and disambiguating required? When evaluating individuals, a very high level of data cleaning is required, but for other types of research questions one may accept certain levels of error, as long as these errors do not correlate with the variables under study. We revisit an earlier case study with a rather crude way of data handling, where it was expected that the unavoidable errors would even out. In this paper, we perform a sophisticated data cleaning and disambiguation of the same dataset and then repeat the earlier analysis. We compare the results and discuss conclusions about the required level of data cleaning.
Following the innovative method from the SPRU paper by Hicks and Katz in 1996, we investigated different aspects of interdisciplinary trends in Europe. The paper uses ISI data covering the period 1982-2003. The trend towards multi- and interdisciplinarity in the natural, medical and technological sciences grows stronger over time. In our analysis we use the number of publications and citations in different areas of research, countries, sectors and universities. This gives an overview of interdisciplinarity as a phenomenon. Detailed Swedish data is used as a case study. The paper concludes with a short discussion on interdisciplinarity and research level.