  • Lundström, Edvin
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    On the Proxy Modelling of Risk-Neutral Default Probabilities, 2020. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Since the default of Lehman Brothers in 2008, it has become increasingly important to measure, manage and price the default risk in financial derivatives. Default risk in financial derivatives is referred to as counterparty credit risk (CCR). The price of CCR is captured in Credit Valuation Adjustment (CVA). This adjustment should in principle always enter the valuation of a derivative traded over-the-counter (OTC).

    To calculate CVA, one needs to know the probability of default of the counterparty. Since CVA is a price, what one needs is the risk-neutral probability of default. The typical way of obtaining risk-neutral default probabilities is to build credit curves calibrated using Credit Default Swaps (CDS). However, for a majority of a bank's counterparties there are no CDSs liquidly traded. This constitutes a major challenge. How does one model the risk-neutral default probability in the absence of observable CDS spreads?

    A number of methods for constructing proxy credit curves have been proposed previously. A particularly popular choice is the so-called Nomura (or cross-section) model. In studying this model, we find some weaknesses, which in some instances lead to degenerate proxy credit curves. In this thesis we propose an altered model, where the modelling quantity is changed from the CDS spread to the hazard rate. This ensures that the obtained proxy curves are valid by construction.

    We find that in practice, the Nomura model in many cases gives degenerate proxy credit curves. We find no such issues for the altered model. In some cases, we see that the differences between the models are minor. The conclusion is that the altered model is a better choice since it is theoretically sound and robust.
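
    As a sketch of the standard reduced-form relations behind this argument (our notation, not necessarily the thesis's exact parametrisation): the survival probability implied by a hazard rate $\lambda(s)$ is

    $$Q(\tau > t) = \exp\left(-\int_0^t \lambda(s)\,ds\right),$$

    so any non-negative proxy hazard rate yields a decreasing survival curve in $(0,1]$, i.e. a valid credit curve by construction, whereas a proxied CDS spread can bootstrap to negative hazard rates. For intuition, the credit-triangle approximation $\lambda \approx s/(1-R)$ links a flat spread $s$ and a recovery rate $R$ to the hazard rate.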

  • Public defence: 2020-06-11 10:00, publicly via Zoom
    Arzpeyma, Niloofar
    KTH, School of Industrial Engineering and Management (ITM).
    Model Developments to Study Some Aspects of Improving Efficiencies in EAF Plants, 2020. Doctoral thesis, monograph (Other academic).
    Abstract [en]

    The aim of this thesis is to investigate some aspects of improvement with respect to energy consumption and raw material selection, as well as the understanding of the influence of uncertainties on the performance of electric arc furnace (EAF) plants. The effect of electromagnetic stirring on scrap melting and post-combustion capacity is investigated in two EAFs using computational fluid dynamics (CFD) models. The results showed that electromagnetic stirring can contribute to a better heat transfer rate at the melt-scrap interface. The Grashof and Nusselt numbers for both electromagnetic stirring and natural convection were estimated and compared to data from previous studies. The results of the post-combustion modelling in the duct system were used to predict the concentration of uncombusted CO at a possible position for installing off-gas analysis equipment, and modelling of the post-combustion in the whole furnace showed that it can be improved by increasing the flow rate of the secondary oxygen in a virtual lance burner (VLB) during the meltdown and refining periods of the process. In order to investigate the influence of raw material additions on energy, melt composition and slag properties, a static mass and energy balance model is developed. The distribution ratios for metallic elements and dust parameters are calibrated using process data from an EAF. The model is then applied to investigate the effect of hot briquetted iron (HBI) additions in that particular EAF. The results showed that these additions increased electricity consumption and slag amount. The model is also applied to predict how the amount of slag formers can be adjusted to reach a desired MgO saturation level. In addition, a statistical model is developed which simulates the melt composition by applying uncertainties in scrap composition, scrap weighing and element distribution factors. The model can estimate the mean and standard deviation of the element concentrations of scraps. The results of applying the model to an EAF showed that the simulated melt chemical composition is in good agreement with the measured one when the estimated scrap values are used as model input.

  • Berg, Simon
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Elfström, Victor
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    IRRBB in a Low Interest Rate Environment, 2020. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Financial institutions are exposed to several different types of risk. One of the risks that can have a significant impact is the interest rate risk in the banking book (IRRBB). In 2018, the European Banking Authority (EBA) released a regulation on IRRBB to ensure that institutions make adequate risk calculations. This article proposes an IRRBB model that follows EBA's regulations. Among other things, this framework contains a deterministic stress test of the risk-free yield curve; in addition to this, two different types of stochastic stress tests of the yield curve were made. The results show that the deterministic stress tests give the highest risk, but that their outcomes are considered less likely to occur than the outcomes generated by the stochastic models. It is also demonstrated that EBA's proposed stress model could be better adapted to the low interest rate environment that we currently experience. Furthermore, a discussion is held on the need for a more standardized framework to clarify, both for the institutions themselves and for the supervisory authorities, the risks that institutions are exposed to.
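
    As a minimal illustration of a deterministic yield-curve stress test in the spirit of the EBA/Basel outlier framework (the cash flows, tenors and the 200 bp shock size here are assumptions for the sketch, not the thesis's data), one can revalue a banking-book cash-flow ladder under a shocked curve and report the change in economic value:

        import numpy as np

        # Hypothetical banking-book cash flows at yearly tenors (illustrative only).
        tenors = np.array([1.0, 2.0, 5.0, 10.0])             # years
        cashflows = np.array([100.0, 80.0, 120.0, 60.0])     # net cash flows
        base_curve = np.array([0.001, 0.003, 0.007, 0.012])  # zero rates

        def economic_value(zero_rates):
            """Present value of the cash-flow ladder under a zero curve."""
            return float(np.sum(cashflows * np.exp(-zero_rates * tenors)))

        # One of the six standard deterministic scenarios: a parallel +200 bp shift.
        shocked = base_curve + 0.02
        print("Delta EVE:", economic_value(shocked) - economic_value(base_curve))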

  • Herron, Christopher
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Zachrisson, André
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Machine Learning Based Intraday Calibration of End of Day Implied Volatility Surfaces, 2020. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The implied volatility surface plays an important role for front office and risk management functions at Nasdaq and other financial institutions, which require intraday mark-to-market of derivative books in order to properly value their instruments and measure risk in trading activities. Based on these business needs, being able to calibrate an end-of-day implied volatility surface to new market information is a sought-after capability. In this thesis a statistical learning approach is used to calibrate the implied volatility surface intraday. This is done by using OMXS30 implied volatility surface data from 2019 in combination with market information from close-to-the-money options, and feeding it into three machine learning models. The models, a feed-forward neural network, a recurrent neural network and a Gaussian process, were compared based on optimal input and data preprocessing steps. When comparing the best machine learning model to the benchmark, the performance was similar, indicating that the calibration approach did not offer much improvement. However, the calibrated models had a slightly lower spread and average error compared to the benchmark, indicating that there is potential in using machine learning to calibrate the implied volatility surface.
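
    A minimal sketch of the Gaussian-process variant (the toy surface points and kernel hyperparameters are assumptions; the thesis's models, features and preprocessing are more elaborate): fit implied volatility as a function of moneyness and time to maturity, then query the calibrated surface intraday:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        # Toy end-of-day quotes: (moneyness, time to maturity in years) -> implied vol.
        X = np.array([[0.9, 0.25], [1.0, 0.25], [1.1, 0.25],
                      [0.9, 1.00], [1.0, 1.00], [1.1, 1.00]])
        y = np.array([0.25, 0.20, 0.22, 0.23, 0.21, 0.22])

        # Anisotropic RBF kernel: separate length scales for the two inputs.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.1, 0.5]), alpha=1e-6)
        gp.fit(X, y)

        # Intraday query of the calibrated surface at a new point.
        print(gp.predict(np.array([[0.95, 0.5]])))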

  • Wang, Nancy
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Spectral Portfolio Optimisation with LSTM Stock Price Prediction, 2020. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Nobel Prize-winning modern portfolio theory (MPT) is considered one of the most important and influential economic theories in finance and investment management. MPT assumes investors to be risk-averse and uses the variance of asset returns as a proxy for risk in order to maximise the performance of a portfolio. Successful portfolio management thus relies on accurate risk estimates and asset return predictions. Risk estimates are commonly obtained through traditional asset pricing factor models, which allow the systematic risk to vary over the time domain but not in the frequency domain. This approach can impose limitations in, for instance, risk estimation. To tackle this shortcoming, interest in applications of spectral analysis to financial time series has increased recently. Among these are the novel spectral portfolio theory and the spectral factor model, which demonstrate enhanced portfolio performance through spectral risk estimation [1][11]. Moreover, stock price prediction has always been a challenging task due to its non-linearity and non-stationarity, while machine learning has been successfully implemented in a wide range of applications where traditional approaches are infeasible. Recent research has demonstrated significant results in single stock price prediction with artificial LSTM neural networks [6][34]. This study aims to evaluate the combined effect of these two advancements in a portfolio optimisation problem: to optimise a spectral portfolio with stock prices predicted by LSTM neural networks. To do so, we began with mathematical derivation and theoretical presentation, and then evaluated the portfolio performance generated by the spectral risk estimates and the LSTM stock price predictions, as well as the combination of the two. The results demonstrate that the LSTM predictions alone performed better than the combination, which in turn performed better than the spectral risk estimates alone.
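
    To make the frequency-domain view of return variance concrete, a toy sketch (the synthetic returns and the one-month band edge are assumptions, not the thesis's estimator) splits a return series' spectral mass between low and high frequencies via the periodogram:

        import numpy as np

        rng = np.random.default_rng(0)
        returns = rng.normal(0.0, 0.01, 512)   # placeholder daily returns

        # Periodogram over positive frequencies (cycles per day).
        demeaned = returns - returns.mean()
        freqs = np.fft.rfftfreq(returns.size, d=1.0)
        power = np.abs(np.fft.rfft(demeaned)) ** 2 / returns.size

        # Rough split of spectral mass at horizons longer/shorter than ~1 month.
        low = power[(freqs > 0) & (freqs <= 1 / 21)].sum()
        high = power[freqs > 1 / 21].sum()
        print("low-frequency share of spectral mass:", low / (low + high))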

  • Larsson, Karl
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Valuation of Additional Tier-1 Contingent Convertible Bonds (AT1 CoCo): Accounting for Extension Risk, 2020. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The investment and financing instrument AT1, or contingent convertible bond, has become popular in the post-crisis capital markets, prompting interest and research in the academic world. The instrument is defined as debt but has equity-boosting properties, which makes it rather extraordinary, and its stochastic features make multiple mathematical valuation methodologies relevant, especially with regard to the risk of extending the call date of the instrument. With investors still relying on screening tools for valuation, there is an absence of applications using existing mathematical approaches. This report therefore aims to narrow the gap between academia and industry by evaluating the use of such mathematical approaches in a practical investment setting; in particular, the improved credit derivative approach and the extension premium relative value approach are examined. Both models strive to account for extension risk, a commonly disregarded yet critical risk, which adds computational challenges to the implementation. Besides identifying necessary practical adjustments and their effects, the two pricing approaches are compared in an attempt to confirm their joint purpose of accounting for extension risk. The results were varied: the improved credit derivative model showed evident offsets, while the extension premium model showed significant correlations. Their individual performance was thus diverse, and the hypothesis of joint behaviour could be dismissed.

  • Djerf, Adrian
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Valuation of Additional Tier-1 Contingent Convertible Bonds (AT1 CoCo): Modelling trigger risk in a practical investment setting, 2020. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Contingent convertible bonds (often referred to as CoCo bonds, or simply CoCos) are a relatively new financial instrument designed to absorb unexpected losses. This instrument became increasingly common after the financial crisis of 2008, as a way to decrease the risk of insolvency among banks and other financial institutions.

    In this thesis, we investigate two mathematical models for the valuation of CoCo bonds, known as the credit derivative approach and the equity derivative approach, previously developed by De Spiegeleer and Schoutens [1]. We investigate how these models can be modified in order to be applied to a large set of bonds available on the market. The effect of parameter alterations is also studied, in order to determine which parameters influence the pricing accuracy the most.

    We reach the conclusion that, by estimating market triggers and conversion prices and by computing a continuous interest rate from a discrete rates table, the models are indeed executable on a large set of bonds available on the market. However, these parameter estimations come at the cost of reduced accuracy. In general, both investigated models produce prices which follow the overall movements of the market prices quite well, but at the same time with a relatively large absolute distance from the market prices. In other words, the correlation with the market is often high, but the absolute error (measured by root mean square error) is often large.

    The sensitivity analysis of the parameters shows that the market trigger is the most influential parameter in both investigated models. The fact that we had to estimate the market trigger in order to be able to price a large number of bonds is believed to be the main cause of reduced accuracy.

    By utilizing a more bond-specific parameter estimation, the accuracy of the investigated models could most likely be improved. We conclude that there is a trade-off between being able to price a large set of bonds with mediocre accuracy and being able to price a few bonds with high accuracy.
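
    For reference, the credit derivative approach of De Spiegeleer and Schoutens rests on the first-passage probability of the share price hitting an implied market trigger level $S^*$ before maturity $T$; in its standard Black-Scholes form (our notation),

    $$p^* = \Phi\!\left(\frac{\log(S^*/S_0) - \mu T}{\sigma\sqrt{T}}\right) + \left(\frac{S^*}{S_0}\right)^{2\mu/\sigma^2} \Phi\!\left(\frac{\log(S^*/S_0) + \mu T}{\sigma\sqrt{T}}\right), \qquad \mu = r - q - \tfrac{1}{2}\sigma^2,$$

    from which a trigger intensity $\lambda = -\log(1 - p^*)/T$ and a CoCo spread of roughly $(1 - R_{\mathrm{CoCo}})\,\lambda$ follow. This makes plain why the market trigger $S^*$, which had to be estimated here, dominates the pricing accuracy.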

  • Berglund, Pontus
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Kamangar, Daniel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    An Empirical Study on the Reversal Interest Rate, 2020. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Previous research suggests that a policy interest rate cut below the reversal interest rate reverses the intended effect of monetary policy and becomes contractionary for lending. This paper is an empirical investigation into whether the reversal interest rate was breached in the Swedish negative interest rate environment between February 2015 and July 2016. We find that banks with a greater reliance on deposit funding were adversely affected by the negative interest rate environment, relative to other banks. This is because deposit rates are constrained by a zero lower bound: banks are reluctant to introduce negative deposit rates for fear of deposit withdrawals. We show with a difference-in-differences approach that the most affected banks reduced loans to households and raised 5-year mortgage lending rates, compared to the less affected banks, in the negative interest rate environment. These banks also experienced a drop in profitability, suggesting that the zero lower bound on deposits caused the lending spread of banks to be squeezed. However, we do not find evidence that the reversal rate has been breached.
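
    The canonical two-way fixed-effects form of such a difference-in-differences regression is (our schematic notation; the paper's exact specification and controls may differ)

    $$y_{it} = \alpha_i + \gamma_t + \beta\,\big(\mathrm{HighDeposit}_i \times \mathrm{NIRP}_t\big) + \varepsilon_{it},$$

    where $y_{it}$ is a lending outcome for bank $i$ at time $t$, $\mathrm{HighDeposit}_i$ flags deposit-reliant banks, $\mathrm{NIRP}_t$ flags the negative-rate period, and $\beta$ captures the differential response of the affected banks.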

  • Sultani, Rawand
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Rebalancing 2.0 - A Macro Approach to Portfolio Rebalancing, 2020. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Portfolio rebalancing has become a popular tool for institutional investors over the last decade. Adaptive asset allocation, an approach suggested by William Sharpe, is a new approach to portfolio rebalancing that takes the market capitalization of asset classes into consideration when setting the normal portfolio and adapting it to a risk profile. The purpose of this thesis is to compare the traditional approach to portfolio rebalancing with the adaptive one. The comparison consists of backtesting and of two simulation methods, Monte Carlo and Latin hypercube sampling, which are compared computationally by measuring running time and memory usage. The comparisons were done in Excel and in R, respectively. It was found that both asset allocation approaches gave similar results in terms of the relevant risk measures, but that the traditional approach was a cheaper and easier alternative to implement and therefore might be preferable to the adaptive approach from a practical perspective. The sampling methods were found to show no difference in memory usage, but Monte Carlo sampling had around 50% lower average running time while at the same time being easier to implement.
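
    A minimal sketch of the sampling comparison (the dimensions and sample counts are arbitrary here; the thesis ran its comparison in R, while this sketch uses Python's scipy): draw the same number of scenarios with plain Monte Carlo and with Latin hypercube sampling and time both:

        import time
        import numpy as np
        from scipy.stats import qmc

        n, d = 100_000, 5   # scenarios x risk factors (illustrative sizes)

        t0 = time.perf_counter()
        mc = np.random.default_rng(0).uniform(size=(n, d))    # plain Monte Carlo
        t1 = time.perf_counter()
        lhs = qmc.LatinHypercube(d=d, seed=0).random(n)       # stratified LHS
        t2 = time.perf_counter()

        # Both are n-by-d samples on [0, 1); compare wall-clock cost of generation.
        print(f"Monte Carlo: {t1 - t0:.3f}s   Latin hypercube: {t2 - t1:.3f}s")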

  • Sandfeldt, Sven
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Local Rigidity of Some Lie Group Actions, 2020. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this paper we study local rigidity of actions of simply connected Lie groups. In particular, we apply the Nash-Moser inverse function theorem to give sufficient conditions for the action of a simply connected Lie group to be locally rigid. Let $G$ be a Lie group, $H < G$ a simply connected subgroup and $\Gamma < G$ a cocompact lattice. We apply the result for general actions of simply connected groups to obtain sufficient conditions for the action of $H$ on $\Gamma\backslash G$ by right translations to be locally rigid. We also discuss some possible applications of this sufficient condition.

  • Lind, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS.
    Stahre, Mattias
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS.
    Deinterleaving of radar pulses with batch processing to utilize parallelism, 2020. Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    The threat level (specifically in this thesis, for aircraft) in an environment can be determined by analyzing radar signals. This task is critical and has to be solved fast and with high accuracy. The received electromagnetic pulses have to be identified in order to classify a radar emitter. Usually, there are several emitters transmitting radar pulses at the same time in an environment. These pulses need to be sorted into groups, where each group contains pulses from the same emitter.

    This thesis aims to find a fast and accurate solution to sort the pulses in parallel. The selected approach analyzes batches of pulses in parallel to exploit the advantages of a multi-threaded Central Processing Unit (CPU) or a Graphics Processing Unit (GPU). Firstly, a suitable clustering algorithm had to be selected. Secondly, an optimal batch size had to be determined to achieve high clustering performance and to rapidly process the batches of pulses in parallel. A quantitative method based on experiments was used to measure clustering performance, execution time, system response, and parallelism as a function of batch sizes when using the selected clustering algorithm.

    The algorithm selected for clustering the data was Density-based Spatial Clustering of Applications with Noise (DBSCAN) because of its advantages, such as not having to specify the number of clusters in advance, its ability to find arbitrarily shaped clusters in a data set, and its low time complexity. The evaluation showed that implementing parallel batch processing is possible while still achieving high clustering performance, compared to a sequential implementation that used the maximum likelihood method. An optimal batch size in terms of data points and cutoff time is hard to determine, since the batch size is very dependent on the input data. Therefore, one batch size might not be optimal in terms of clustering performance and system response for all streams of data. A solution could be to determine optimal batch sizes in advance for different streams of data, and then adapt the batch size depending on the stream of data.

    However, with a high level of parallelism, an additional delay is introduced that depends on the difference between the time it takes to collect data points into a batch and the time it takes to process the batch, thus the system will be slower to output its result for a given batch compared to a sequential system. For a time-critical system, a high level of parallelism might be unsuitable since it leads to slower response times.
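
    A minimal sketch of clustering one batch of pulses with DBSCAN (the pulse features, emitter parameters and eps/min_samples values are illustrative assumptions, not the thesis's configuration):

        import numpy as np
        from sklearn.cluster import DBSCAN
        from sklearn.preprocessing import StandardScaler

        # Synthetic batch: two emitters, features = time of arrival (s),
        # carrier frequency (MHz) and pulse width (us).
        rng = np.random.default_rng(1)
        emitter_a = rng.normal([0.0, 9400.0, 1.0], [0.1, 5.0, 0.05], size=(200, 3))
        emitter_b = rng.normal([0.0, 2800.0, 20.0], [0.1, 3.0, 0.5], size=(200, 3))
        batch = np.vstack([emitter_a, emitter_b])

        # Scale features so a single eps is meaningful across dimensions, then cluster.
        features = StandardScaler().fit_transform(batch)
        labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(features)
        print(np.unique(labels))   # one label per emitter; -1 marks noise pulses

    Independent batches can then be dispatched to separate worker threads or processes, which is where the parallelism discussed above comes in.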

  • Public defence: 2020-06-12 10:00 U1, Stockholm
    Tang, Jiexiong
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Deep Learning Assisted Visual Odometry, 2020. Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    The capability to autonomously explore and interact with the environment has always been greatly demanded for robots. Various sensor-based SLAM methods were investigated and served this purpose in the past decades. Vision intuitively provides 3D understanding of the surroundings and contains a vast amount of information that requires high-level intelligence to interpret. Sensors like LIDAR return range measurements directly; motion estimation and scene reconstruction using a camera is a harder problem. In this thesis, we are in particular interested in the tracking front-end of vision-based SLAM, i.e. Visual Odometry (VO), with a focus on deep learning approaches. Recently, learning-based methods have dominated most vision applications and gradually appear in our daily life and real-world applications. Differently from classical methods, deep learning based methods can potentially tackle some of the intrinsic problems in multi-view geometry and straightforwardly improve the performance of crucial procedures of VO, for example correspondence estimation, dense reconstruction and semantic representation.

    In this work, we propose novel learning schemes for assisting both direct and indirect visual odometry methods. For the direct approaches, we investigate mainly the monocular setup. The lack of the baseline that provides scale, as in stereo, has been one of the well-known intrinsic problems in this case. We propose a coupled single-view depth and normal estimation method to reduce the scale drift and address the issue of lacking observations of the absolute scale. This is achieved by providing priors for the depth optimization. Moreover, we utilize higher-order geometrical information to guide the dense reconstruction in a sparse-to-dense manner. For the indirect methods, we propose novel feature learning based methods which noticeably improve the feature matching performance in comparison with common classical feature detectors and descriptors. Finally, we discuss potential ways to make the training self-supervised. This is accomplished by incorporating the differential motion estimation into the training while performing multi-view adaptation to maximize the repeatability and matching performance. We also investigate using a different type of supervisory signal for the training: we add a higher-level proxy task and show that it is possible to train a feature extraction network even without an explicit loss for it.

    In summary, this thesis presents successful examples of incorporating deep learning techniques to assist a classical visual odometry system. The results are promising and have been extensively evaluated on challenging benchmarks, real robots and handheld cameras. The problem we investigate is still at an early stage, but is attracting more and more interest from researchers in related fields.

  • Hult, Henrik
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Favero, Martina
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Estimates of the proportion of SARS-CoV-2 infected individuals in Sweden. Manuscript (preprint) (Other academic).
    Abstract [en]

    In this paper a Bayesian SEIR model is studied to estimate the proportion of the population infected with SARS-CoV-2, the virus responsible for COVID-19. To capture heterogeneity in the population and the effect of interventions to reduce the rate of epidemic spread, the model uses a time-varying contact rate, whose logarithm has a Gaussian process prior. A Poisson point process is used to model the occurrence of deaths due to COVID-19, and the model is calibrated using data of daily death counts in combination with a snapshot of the proportion of individuals with an active infection, performed in Stockholm in late March. The methodology is applied to regions in Sweden. The results show that the estimated proportion of the population who has been infected is around 13.5% in Stockholm, by 2020-05-15, and ranges between 2.5%-15.6% in the other investigated regions. In Stockholm, where the peak of daily death counts is likely behind us, parameter uncertainty does not heavily influence the expected daily number of deaths, nor the expected cumulative number of deaths. It does, however, impact the estimated cumulative number of infected individuals. In the other regions, where random sampling of the number of active infections is not available, parameter sharing is used to improve estimates, but the parameter uncertainty remains substantial.
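
    For reference, the deterministic skeleton of an SEIR model with a time-varying contact rate reads (our notation; the paper's Bayesian formulation adds priors and an observation model on deaths)

    $$\frac{dS}{dt} = -\beta(t)\,\frac{S I}{N}, \qquad \frac{dE}{dt} = \beta(t)\,\frac{S I}{N} - \sigma E, \qquad \frac{dI}{dt} = \sigma E - \gamma I, \qquad \frac{dR}{dt} = \gamma I,$$

    with $\log \beta(t)$ given a Gaussian process prior as described above, and the observed daily death counts modelled through a Poisson point process whose intensity is driven by the epidemic state.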

  • Lundberg, David
    KTH, School of Architecture and the Built Environment (ABE), Sustainable development, Environmental science and Engineering.
    Policyanalys åt Klimatpolitiska rådet: Kortsiktsscenarier över Sveriges territoriella växthusgasutsläpp som verktyg för att utvärdera politik [Policy analysis for the Swedish Climate Policy Council: short-term scenarios of Sweden's territorial greenhouse gas emissions as a tool for evaluating policy], 2020. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this thesis a tool is constructed with the aim of assessing the pace at which Swedish greenhouse gas emissions are reduced. This is done by constructing what-if scenarios. The scenarios answer the question: if current policies are kept, the economy grows according to forecasts by the National Institute of Economic Research, and no external perturbations take place, what would happen to Swedish emissions of greenhouse gases? A linear regression is drawn for the coming four years in a scenario. This line shows the pace of emission reductions and can easily be compared to the pace prescribed by the linear indicative emission curves against which Swedish climate progress in the non-trading sector should be evaluated according to the Climate Act. The goals and pace of emission reductions are also applied to the trading sector to enable easy evaluation of total Swedish emissions. The tool is meant to be easy to use and should utilize readily available and regularly updated data. Short-term forecasts of energy use from the Swedish Energy Agency have been used as input for fuels and make up the bulk of the work. The results of the tool come with uncertainty; but as it is not a forecast, and is not meant to show details about Swedish progress in reducing greenhouse gas emissions, but rather serves as recurring and overarching feedback, it is assessed to be of interest.

    The short-term scenario for 2019 is constructed and shows a yearly decrease in emissions of 550 kton CO2e between 2018 and 2022. This is compared to the indicative emission curve, which prescribes a decrease of 1400 kton per year. This indicates that there is room for increased ambition in the work of reducing Swedish emissions.

  • Lindholm, Anton
    KTH, School of Architecture and the Built Environment (ABE), Architecture.
    Lost in translation, 2019. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis explores the threshold between the analog and digital realms through various investigations of theories and methods. My interest in this subject arose when reflecting upon my five years at KTH, which describe a gradual transition from analog to digital. This raised questions about the relevance of the analog in an otherwise digital reality. The aim of this project was never defined in advance; instead, a selection of questions and observations emerged as a result. The intention was never to champion one or the other, but rather to investigate new possibilities connected to their use.

  • Lundberg, Rasmus
    KTH, School of Architecture and the Built Environment (ABE), Architecture.
    Kontextuell helhet av 3D-printad träullsandwich - Från prefab till printning in-situ [Contextual whole of 3D-printed wood-wool sandwich: from prefab to in-situ printing], 2019. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The project aims to propose a direction for how additive manufacturing methods can influence architecture, to study the techniques, and to find out which direction could be perceived as most rewarding or viable. How can the potential of the new technology be put to good use? I have tried to develop a product that utilizes the potential of additive manufacturing methods and that is conceivable for full-scale realization in the construction sector in the near future. The product consists of a method for producing long-lasting sandwich constructions with high wood content. The method reduces the building industry's climate impact and can provide great spatial qualities and design possibilities. Through physical experiments and exploration of various digital fabrication methods, I have tried to visualize and identify possibilities with these new technological aids. Through practical tests, I have tested my ideas of how these methods can be used effectively. The project was expanded from initially studying additive production methods to, later during the application phase, also including digital aids such as photogrammetry and tools for parametric design. The project has resulted in a strategy for printing cellulose-based sandwich constructions in printed molds of recyclable biocomposite.

  • Public defence: 2020-08-21 12:00 FD5, Stockholm
    Kördel, Mikael
    KTH, School of Engineering Sciences (SCI), Applied Physics, Biomedical and X-ray Physics.
    Biological Laboratory X-Ray Microscopy, 2020. Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Soft x-ray microscopy in the water window (𝜆 ≈ 2.3 − 4.3 nm) is a powerful technique for high-resolution biological imaging. The strong natural contrast between carbon-based structures and water allows visualization of hydrated and unstained samples, while providing enough transmission through up to ∼ 10 μm of organic matter. Furthermore, the full potential of this technique can be exploited by performing computed tomography, thus obtaining a complete 3D image of the object.

    Routine short-exposure water-window microscopy of whole cells and tissue is currently performed at synchrotron-radiation facilities around the world, but with a limited accessibility to the wider research community. For this reason, laboratory-based systems have been developed, which are now reaching maturity. The benefits compared to the synchrotron-based instruments include easier integration with complementary methods in the home laboratory, in addition to the increased access that allows for the often time-consuming optimization of experimental parameters as well as longitudinal studies.

    This thesis presents recent developments of the Stockholm laboratory x-ray microscope as well as several biological applications. Work has been done on improving the mechanical and thermal stability of the microscope, resulting in a resolution of 25 nm (half period) in images of test targets. The biological applications were enabled by a significantly increased x-ray flux through the system as well as improved operational stability. This work demonstrates imaging of whole cryofixed cells with 10-second exposures, imaging of viral infections in cells, and cryotomography with 20 minutes of total exposure.

  • Borpatra Gohain, Prakash
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Jansson, Magnus
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Relative cost based model selection for sparse high-dimensional linear regression models, 2020. In: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2020, p. 5515-5519. Conference paper (Refereed).
    Abstract [en]

    In this paper, we propose a novel model selection method named the multi-beta-test (MBT) for the sparse high-dimensional linear regression model. The estimation of the correct subset in the linear regression problem is formulated as a series of hypothesis tests, where the test statistic is based on the relative least-squares cost of successive parameter models. The performance of MBT is compared to existing model selection methods for high-dimensional parameter spaces, such as the extended Bayesian information criterion (EBIC), the extended Fisher information criterion (EFIC), residual ratio thresholding (RRT) and orthogonal matching pursuit (OMP) with a priori knowledge of the sparsity. Simulation results indicate that the performance of MBT in identifying the true support set surpasses that of EBIC, EFIC and RRT in certain regions of the considered parameter settings.
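
    A small sketch of the quantity the tests are built on, the relative least-squares cost of successive nested models (the toy problem and the greedy atom selection are assumptions for illustration; the MBT test statistic and its thresholds are defined in the paper):

        import numpy as np

        # Toy sparse linear model: y = A x + noise with a 3-sparse x.
        rng = np.random.default_rng(0)
        n, p = 50, 200
        A = rng.normal(size=(n, p)) / np.sqrt(n)
        x_true = np.zeros(p)
        x_true[[5, 17, 42]] = [3.0, -2.0, 1.5]
        y = A @ x_true + 0.05 * rng.normal(size=n)

        def ls_cost(support):
            """Residual least-squares cost of the model restricted to `support`."""
            if not support:
                return float(y @ y)
            beta, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            return float(np.sum((y - A[:, support] @ beta) ** 2))

        support, prev = [], ls_cost([])
        for step in range(6):   # grow the support greedily, OMP-style
            if support:
                beta, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                resid = y - A[:, support] @ beta
            else:
                resid = y
            support.append(int(np.argmax(np.abs(A.T @ resid))))
            cost = ls_cost(support)
            print(step, support[-1], round(cost / prev, 4))  # relative cost of successive models
            prev = cost

    The relative cost drops sharply while true atoms are added and flattens toward one afterwards, which is the behaviour a sequence of hypothesis tests can exploit.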

  • Bergman, Anton
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Machine Design (Div.).
    Eriksson, Robin
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Machine Design (Div.).
    Grahn, Lars-Fredrik
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Machine Design (Div.).
    Self-levelling Platform Concept for a Winch-based, Single Point Absorbing, Wave Energy Converter, 2020. Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    This report covers a bachelor thesis project to design a concept for a levelling system for a point-absorbing wave energy converter that uses a winch with a chain; the chain has restricted bending capability and thus requires a system that compensates for this. First, a literature study was made to see if there were any technologies that could be used, and a wide search for information about the wave conditions in the Baltic Sea was performed to find what requirements would be necessary for the concept to withstand the conditions faced there. Following this, several brainstorming sessions were held to generate ideas for different types of constructions that could solve the problem. After multiple ideas had been conceptualized, they were rated in a Pugh matrix with six different criteria: 1. mechanical complexity, 2. complexity of required motion control, 3. complexity of the structure, 4. number of potential critical weak points, 5. mass of the system, and 6. how symmetrical it could be made. The concept that was deemed most viable out of all of them is a cradle that holds the winch drum and is controlled by a motor to compensate for one angular shift; this is paired with a mooring system that limits the yawing motion of the entire buoy and thus removes the need to compensate for that angle. This concept was then modelled in Solid Edge, after which a stress analysis was made to determine the forces that would act upon the system. These were then used, together with fatigue calculations, to determine whether the system would live up to the requirements. Lastly, a list of recommended future work is presented.

  • Arshad, Fawwaz Ahmad
    KTH, School of Engineering Sciences (SCI), Physics.
    The Effects of Patchy Connectivity on Pattern Formation in Biological Neuronal Networks, 2020. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
  • Matic, Gabrijella
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Sustainable production development.
    Larsen, Johan
    KTH, School of Industrial Engineering and Management (ITM), Sustainable production development.
    State of the Art Rapport inom laserskärning vid bearbetning av aluminium och rostfritt stål [State-of-the-art report on laser cutting in the processing of aluminum and stainless steel], 2020. Independent thesis, Basic level (university diploma), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    This state-of-the-art report deals with laser cutting of aluminum and stainless steel. The report answers what has been written over the last decade within the areas of aluminum and stainless steel respectively, and which differences can be found in the material characteristics after processing through laser cutting. The report contains several analyses and summaries of the articles written on the subject. The analyses of the reviewed articles show a number of different results in different categories. The categories are: optimization, efficiency, impact on the materials, and nuclear decommissioning. Optimization within aluminum explores how waste material can obtain the best possible quality, while optimization within stainless steel examines procedures that make it possible to calculate the cost of operation or improve the cutting process. Efficiency within aluminum is about making the cutting process more efficient, whereas efficiency within stainless steel is about improving performance, quality and the consumption of assist gas. The category impact on the materials shows how deeply the materials are affected and how the processed surface reacts to the process. For stainless steel, the category nuclear decommissioning investigates whether laser cutting could be implemented in that field to ensure that the process works there. This report contains a matrix of the analyzed articles to provide an overview of the subject.

  • Colonel-Bertrand, Gauthier Pierre-Antoine
    KTH, School of Industrial Engineering and Management (ITM), Energy Technology, Energy and Climate Studies, ECS.
    Modelling of the Hong Kong Power System by 2030, 2020. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Hong Kong is a semi-autonomous region of the People’s Republic of China. As a former British colony on the South China Sea, it enjoyed early exposure to international trade. Hong Kong now features a developed liberal economy largely based on financial services. It is also densely populated and has few indigenous energy resources. Currently, its power sector is 75% reliant on imported fossil fuels, with the remaining 25% being imported from a nuclear power plant in Mainland China. Renewables mostly consist of small-scale innovative pilot projects or embedded solar systems. For these reasons, the region faces strong challenges with respect to air pollution, energy autonomy, dependence on fossil fuels and exposure to climate change. Although Hong Kong is covered by the Nationally Determined Contribution of the People’s Republic of China, it has the competence to design its own energy policy. It recently adopted a climate action plan aiming to bring the share of gas-fired power up to 50% of the mix by 2030 (against 27% in 2015) while bringing coal-fired power down to 25% (against 48% in 2015), as well as setting the framework for renewables to develop. This study focuses on the period 2016-2030 and uses the Long-Range Energy Alternatives Planning (LEAP) tool to model the power system in the region. Possible scenarios are developed to assess the economic and environmental impacts of enhancing clean electricity generation and energy security on the future electricity system. “Business as usual” (BAU) extends the current trends with respect to socioeconomic indicators, energy demand, new power plants, and power plant retirements. “Climate action plan” (CAP) studies the trajectory proposed by the Government. “High renewables share” (HRS) explores how much renewable generation Hong Kong could incorporate in the power generation mix. “Fossil-free electricity” (FFE) questions how many more local resources Hong Kong would need for a fossil-free power system. Finally, “No reliance on Mainland China” (NRMC) explores the dependence of Hong Kong on Mainland China by modelling a hypothetical cut-off from supplies of power and fuel. Results show that Hong Kong is well on track to meet its policy commitments, partly because they are rather conservative and lack ambition. It is also established that there is sufficient area for renewable resources (solar PV, offshore wind, and waste-to-energy) to account for up to 30% of power supply, particularly in the current context of decreasing power demand. The low level of penetration of renewables is found to be caused by a lack of incentives for utility companies rather than by a space constraint. Regarding energy security, a trade-off is found between energy independence and environmental sustainability; Hong Kong will soon have to choose between covering its energy needs on global LNG markets or maintaining imports of low-carbon nuclear power from the Chinese mainland. The cost-sustainability trade-off is also discussed. Scenario “Climate action plan” is found to abate greenhouse gas emissions by 2% with respect to “Business as usual” while costing 3% more over the period of interest. However, the more ambitious “High renewables share” is found to abate greenhouse gas emissions by 10% while costing 22% more than “Business as usual”.

  • Petersson, Albert
    KTH, School of Industrial Engineering and Management (ITM), Energy Technology.
    Mobility-as-a-Service and Electrification of Transport: A Study on Possibilities and Obstacles for Mobility-as-a-Service in Stockholm and Implications for Electrification of Vehicles, 2020. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Increasing urbanization drives the need for cities to make transport more efficient, both to meet climate goals and to create an attractive living environment for residents, with less congestion, noise and local pollution. As vehicles are increasingly electrified, further innovations will be needed to meet environmental, social and economic sustainability targets, and a more efficient use of vehicles and public transport is central in this endeavor. As new generations are increasingly multimodal and digitalization opens up for innovative concepts, the possibilities for innovations to meet these challenges increase. Against this backdrop, Mobility as a Service (MaaS) has emerged as a concept with the potential to increase sustainability and mobility in cities. MaaS is based on the goal of challenging private car ownership by gathering different mobility services in one application, thereby creating a service offering with the potential to be more attractive than car ownership. However, many varieties of MaaS exist, and intensive efforts are ongoing to understand how MaaS can work in practice. MaaS has been described as a phenomenon with the potential to radically change how people move in the future. The purpose of this thesis has been to understand how MaaS can affect the electrification of sustainable cities, with a focus on e-mobility. Starting by trying to understand barriers to increased car sharing in Stockholm, the possibilities and challenges for MaaS in the city are discussed, along with its potential effects on e-mobility there. Data has been collected through continuous evaluation and review of the literature and through interviews conducted with actors and stakeholders within traffic and sustainable mobility in Stockholm. The results have been analyzed from a sustainable innovation perspective to discuss opportunities and challenges for the development of MaaS and its impact on the electrification of vehicles. The collected empirics indicate that Stockholm has good opportunities for facilitating MaaS in the future, mainly due to accessible and extensive public transport (PT). The success of MaaS largely depends on the understanding of the service among consumers, which is why increased attention and marketing are important. An actor’s logic of individually owning the customer contact in order to improve a service offering can be an obstacle to the growth of future mobility platforms. This underlines the need for cooperation between involved players to create momentum for MaaS. At the same time, MaaS benefits from a wide range of underlying mobility services and progressive traffic planning. In this regard, the results indicate that there are a number of different instruments at a macro level that could help sharing services develop. A legal definition of car sharing is a first step towards measures that stimulate MaaS. Measures that smooth the relationship between the private car and car sharing services can create momentum for the latter, for example through exemptions from congestion tax. Access to parking at reasonable cost appears to be a key enabler for flexible shared mobility services in the future, partly because it currently accounts for a large part of the costs, and partly because the degree of flexibility and accessibility is determined by access to parking. At the same time, tougher parking regulations for housing cooperatives (BRFs) and companies have also created a market for mobility services. Long term, there is a consensus that the future of transport is electrified.
    However, the impact of shared services and MaaS is highly shaped by the technical development of vehicles. The ongoing electrification of vehicles highlights the need for charging infrastructure deployed at locations that fit the needs of shared services. The charging equipment and solutions also have to be developed to fit the needs of shared services, with a shared customer in mind.

  • Public defence: 2020-06-15 14:00 Stockholm
    Khodaei, Mohammad
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS.
    The Key to Intelligent Transportation Systems: Identity and Credential Management for Secure and Privacy-Preserving Vehicular Communication Systems, 2020. Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Vehicular Communication (VC) systems can greatly enhance road safety and transportation efficiency, enabling a variety of applications that provide information on traffic efficiency, environmental hazards and road conditions, as well as infotainment. Vehicles are equipped with sensors and radars to sense their surroundings and the external environment, as well as with an internal Controller Area Network (CAN) bus. Hence, vehicles are becoming part of a large-scale network, the so-called Internet of Vehicles (IoV). The deployment of such a large-scale VC system cannot materialize unless VC systems are secure and do not expose their users’ privacy. On the one hand, vehicles could be compromised or their sensors become faulty, thus disseminating erroneous information across the network. Participating vehicles should therefore be held accountable for their actions, and it must be possible to revoke their credentials (their Long Term Certificates (LTCs) and their pseudonyms) efficiently and to disseminate the revocation information in a timely manner throughout a large-scale (multi-domain) VC system. On the other hand, user privacy is at stake: according to standards, vehicles should disseminate spatio-temporal information frequently, e.g., location and velocity. Due to the openness of the wireless communication, an observer can eavesdrop on the vehicular communication to infer users’ sensitive information and possibly profile users based on different attributes, e.g., trace their commutes and identify home/work locations. The objective is to secure the communication, i.e., prevent malicious or compromised entities from affecting the system operation, and to ensure user privacy, i.e., keep users anonymous to any external observer as well as to security infrastructure entities and service providers. This is far from straightforward, because accountability and privacy appear contradictory when required at the same time.

    In this thesis, we first focus on the identity and credential management infrastructure for VC systems, taking security, privacy, and efficiency into account. We begin with a detailed investigation and critical survey of the standardization and harmonization efforts, along with industrial projects and proposals. We point out the remaining challenges to be addressed in order to build a central building block of secure and privacy-preserving VC systems, a Vehicular Public-Key Infrastructure (VPKI). Towards that, we provide a secure and privacy-preserving VPKI design that improves upon existing proposals in terms of security and privacy protection and efficiency. More precisely, our scheme facilitates multi-domain operations in VC systems and enhances user privacy, notably preventing linking of pseudonyms based on timing information and offering increased protection in the presence of honest-but-curious VPKI entities. We further extensively evaluate the performance, i.e., scalability, efficiency, and robustness, of the full-blown implementation of our VPKI for a large-scale VC deployment. We provide tangible evidence that it is possible to support a large area of vehicles by investing in modest computing resources for the VPKI entities. Our results confirm the efficiency, scalability and robustness of our VPKI.

    As a second main contribution of this thesis, we focus on the distribution of Certificate Revocation Lists (CRLs) in VC systems. The main challenges lie in (i) crafting an efficient and timely distribution of CRLs for numerous anonymous credentials, the pseudonyms, (ii) maintaining strong privacy for vehicles prior to revocation events, even with honest-but-curious system entities, and (iii) catering to the computation and communication constraints of on-board units with intermittent connectivity to the infrastructure. Relying on peers to distribute the CRLs is a double-edged sword: abusive peers could "pollute" the process, thus degrading the timely distribution of CRLs. We propose a vehicle-centric solution that addresses all these challenges and thus closes a gap in the literature. Our scheme radically reduces the CRL distribution overhead: each vehicle receives CRLs corresponding only to its region of operation and its actual trip duration. Moreover, a "fingerprint" of CRL pieces is attached to a subset of (verifiable) pseudonyms for fast validation of CRL pieces (while mitigating resource depletion attacks abusing the CRL distribution). Our experimental evaluation shows that our scheme is efficient, scalable, dependable and practical: with no more than 25 KB/s of traffic load, the latest CRL can be delivered to 95% of the vehicles in a region (15 km x 15 km) within 15 s, i.e., more than 40 times faster than the state of the art. Overall, our scheme is a comprehensive solution that complements standards and can catalyze the deployment of secure and privacy-protecting VC systems.
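
    As a toy sketch of the piece-plus-fingerprint idea (the piece size, payload and use of plain SHA-256 are illustrative assumptions; the actual scheme binds fingerprints to verifiable pseudonyms):

        import hashlib

        # Hypothetical regional CRL split into fixed-size pieces for dissemination.
        crl = b"revoked-pseudonym-serials..." * 1000
        piece_size = 1024
        pieces = [crl[i:i + piece_size] for i in range(0, len(crl), piece_size)]

        # One fingerprint per piece, distributed ahead of the pieces themselves.
        fingerprints = [hashlib.sha256(p).digest() for p in pieces]

        def validate(piece, fingerprint):
            """Cheap integrity check of a received CRL piece against its fingerprint."""
            return hashlib.sha256(piece).digest() == fingerprint

        print(all(validate(p, f) for p, f in zip(pieces, fingerprints)))

    A receiving vehicle can thus discard polluted pieces immediately instead of wasting signature verifications on them, which is what mitigates the resource-depletion attacks mentioned above.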

    As the third main contribution of the thesis, we focus on enhancing location privacy protection: vehicular communications disclose rich information about the vehicles and their whereabouts. Pseudonymous authentication secures communication while enhancing user privacy. To further enhance location privacy, cryptographic mix-zones were proposed to allow vehicles to covertly transition to new ephemeral credentials. The resilience to (syntactic and semantic) pseudonym linking attacks highly depends on the geometry of the mix-zones, mobility patterns, vehicle density and arrival rates. Our experimental results show that an eavesdropper could successfully link 73% of pseudonyms during non-rush hours and 62% of pseudonyms during rush hours after vehicles change their pseudonyms in a mix-zone. To mitigate such inference attacks, we present a novel cooperative mix-zone scheme that enhances user privacy regardless of the vehicle mobility patterns, vehicle density and arrival rate to the mix-zone. A subset of vehicles, termed relaying vehicles, are selected to be responsible for emulating non-existing vehicles. These vehicles cooperatively disseminate decoy traffic without affecting safety-critical operations: with 50% of vehicles acting as relaying vehicles, the probability of linking pseudonyms (over the entire interval) drops from 68% to 18%. On average, this imposes 28 ms of extra computation overhead per second on the Roadside Units (RSUs) and 4.67 ms of extra computation overhead per second on the (relaying) vehicle side; it also introduces 1.46 KB/s of extra communication overhead by (relaying) vehicles and 45 KB/s by RSUs for the dissemination of decoy traffic. Thus, user privacy is enhanced at the cost of low computation and communication overhead.

  • Public defence: 2020-06-15 09:30 https://kth-se.zoom.us/s/69079443003
    Jin, Hongyu
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS.
    Cooperative Privacy and Security for Mobile Systems, 2020. Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    The growing popularity of powerful mobile devices, along with the increased computation and storage of the computing infrastructure, has opened up possibilities for versatile mobile system applications. Users, leveraging the sensing capabilities of the devices, can collect rich data and exchange it with diverse Service Providers (SPs) or their close neighboring devices. Providing such user status awareness to the involved system entities can facilitate a customized user experience for system participants.

    Nonetheless, the open and decentralized nature of mobile systems raises concerns about both the security and privacy of users and of the system infrastructure. Sensitive user data could be exposed to honest-but-curious entities, which can further process the data to profile users. At the same time, compromised system entities can feed faulty data to disrupt system functionalities or mislead users. Such issues necessitate secure and privacy-enhancing mobile systems that do not compromise the quality of service the systems provide to their users. More specifically, the solutions should be efficient, scale as the system grows, and be resilient to both external and internal adversaries. This thesis considers two mobile system instances: Location-based Services (LBSs) and Vehicle-to-Vehicle (V2V) safety applications. We address security and privacy in a cooperative manner, relying on cooperation among the users to protect themselves against the adversaries. Due to the reliance on peers, input from the peers should be examined in order to ensure the reliability of the applications. We adapt pseudonymous authentication, designed for Vehicular Communication (VC) systems, and integrate it with LBSs. This protects user privacy and holds users accountable for their actions, which are non-repudiable. At the same time, our scheme prevents malicious nodes from aggressively passing on bogus data. We leverage the redundancy of data shared by multiple cooperating nodes to detect potential conflicts. Any conflict triggers proactive checking of the data with the authoritative entity, which reveals the actual misbehaving users. For V2V safety applications, we extend safety beacons, i.e., Cooperative Awareness Messages (CAMs), to share the signature verification effort for more efficient message verification. As in the LBSs, the redundancy of such piggybacked claims is also key to remedying malicious nodes that abuse this cooperative verification. In addition, the extended beacon format facilitates verification of event-driven messages, including Decentralized Environmental Notification Messages (DENMs), by leveraging proactive authenticator distribution.

    We qualitatively and quantitatively evaluate the achieved security and privacy protection, the latter based on extensive simulation results. We propose a location privacy metric that captures the achieved protection for LBSs, taking the pseudonymous authentication into consideration. The performance of the privacy-enhancing LBS is evaluated experimentally with the help of an implementation on a small-scale automotive computer testbed. We embed processing delays and queue management for message processing in simulations of V2V communication, to show the scalability and efficiency of the resilient V2V communication scheme. The results confirm the resilience to both internal and external adversaries for both systems.

    Download full text (pdf)
    fulltext
  • Agerbeg, Jens
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Statistical Learning and Analysis on Homology-Based Features2020Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Stable rank has recently been proposed as an invariant to encode the result of persistent homology, a method used in topological data analysis. In this thesis we develop methods for statistical analysis as well as machine learning methods based on stable rank. As stable rank may be viewed as a mapping to a Hilbert space, a kernel can be constructed from the inner product in this space. First, we investigate this kernel in the context of kernel learning methods such as support-vector machines. Next, using the theory of kernel embeddings of probability distributions, we give a statistical treatment of the kernel by showing some of its properties and develop a two-sample hypothesis test based on the kernel. As an alternative approach, a mapping to a Euclidean space with learnable parameters can be conceived, serving as an input layer to a neural network. The developed methods are first evaluated on synthetic data. Then the two-sample hypothesis test is applied to the OASIS open-access brain imaging dataset. Finally, a graph classification task is performed on a dataset collected from Reddit.
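
    A minimal sketch of the kernel two-sample test referred to above, based on the maximum mean discrepancy (MMD) from kernel-embedding theory; it assumes the stable-rank invariants have already been turned into feature vectors, and substitutes a generic Gaussian kernel for the stable-rank inner product:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Gram matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2)); in the thesis
    # the kernel would instead come from the stable-rank inner product.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Biased estimate of the squared maximum mean discrepancy.
    Kxx, Kyy, Kxy = (gaussian_kernel(A, B, sigma)
                     for A, B in ((X, X), (Y, Y), (X, Y)))
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

def permutation_test(X, Y, n_perm=1000, sigma=1.0, seed=0):
    # Null distribution obtained by permuting the sample labels.
    rng = np.random.default_rng(seed)
    observed = mmd2(X, Y, sigma)
    Z, n, count = np.vstack([X, Y]), len(X), 0
    for _ in range(n_perm):
        idx = rng.permutation(len(Z))
        count += mmd2(Z[idx[:n]], Z[idx[n:]], sigma) >= observed
    return count / n_perm   # p-value

X = np.random.default_rng(1).normal(0.0, 1.0, (30, 5))   # placeholder features
Y = np.random.default_rng(2).normal(0.8, 1.0, (30, 5))
print("p =", permutation_test(X, Y))
```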

    Download full text (pdf)
    fulltext
  • Public defence: 2020-06-15 10:00 https://kth-se.zoom.us/webinar/register/WN_UXIld9K6RkO3KR7fU9FQig
    Korkovelos, Alexandros
    KTH, School of Industrial Engineering and Management (ITM), Energy Technology, Energy Systems Analysis.
    Advancing the state of geospatial electrification modelling: New data, methods, applications, insight and electrification investment outlooks2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Globally, it is estimated that there are approximately 860 million people without access to electricity. Achieving universal electricity access over the next decade – as part of Sustainable Development Goal 7 – means that many countries will soon need to put in place roadmaps, action plans and policies for ramping up electrification. The challenge is significant: it requires mobilizing considerable financial resources so that electricity can reach poor, rural populations in the least developed areas.

    A look back at history, however, reveals that such a ramp-up of electrification activity is not unprecedented. Many countries in the “Global North” faced similar challenges about a century ago. Past examples indicate that electrification planning – and the ensuing policy – can take different shapes based on the underlying social, technological, economic and political conditions. This brings forward the importance of considering inputs that reflect these conditions. It also highlights the need for reliable data and information that best describe the local context (e.g. resource availability, distribution of population, economic activities or infrastructure). While advancements in geospatial information technology have greatly improved the availability of such information in recent years, its use in electrification planning is not fully exploited.

    This dissertation aims to advance the state of geospatial electrification modelling by demonstrating new data, methods, applications and insights over the course of four academic papers covering three research questions. 

    The first question searches for patterns, policy dilemmas and constraints related to electrification that are common across different times and geographies, the reading of which can shed light on current and future electrification planning activities. In response, paper I takes a retrospective look at the electrification challenge in the United States of America, the United Kingdom, Sweden and China and examines strategies, success stories and failures in each case. The results unveil key lessons regarding the development phases of electrification, with a focus on the role of isolated, small mini-grids.

    The second question asks whether the use of geospatial information technology can introduce new data and methods into an existing modelling framework (e.g. OnSSET) and help tackle electrification planning dilemmas. In response, paper II leverages new open-access datasets to provide spatially explicit estimates of small-scale hydropower potential in Sub-Saharan Africa. Paper III demonstrates twenty-six new, updated or previously missing datasets, the processing of which allows new angles of analysis in electrification planning.

    The third research question focuses on how the OnSSET modelling framework can be improved, open-sourced and scaled so as to allow a broader audience to develop fast, informative, country- and context-specific electrification investment strategies. Here, papers III and IV leverage OnSSET’s modular structure, calibrate its functions and develop customized electrification investment outlooks for Malawi and Afghanistan respectively. These explore different scenarios tuned according to the policy challenges in each country (e.g. gradual electrification in Malawi or planning under conflict risk in Afghanistan). Moreover, this dissertation has expanded OnSSET’s application range as part of the Global Electrification Platform (GEP). The GEP is an open-access, collaborative environment that now hosts 216 electrification investment scenarios (together with the underlying input data and models) for 59 countries worldwide, thus improving the transparency surrounding their review, reproduction or replication by a broader audience.
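
    To make this kind of computation concrete, here is a deliberately simplified, hypothetical settlement-level least-cost comparison in the spirit of geospatial electrification models such as OnSSET; all cost functions and figures are invented for illustration and are not taken from the papers or the GEP:

```python
# Toy settlement-level least-cost comparison in the spirit of OnSSET.
# All cost functions and figures are invented placeholders (USD/MWh).
def lcoe_grid(dist_km, demand_mwh):
    # Generation cost plus an extension cost amortized over demand.
    return 60.0 + 5_000.0 * dist_km / demand_mwh

def lcoe_mini_grid(demand_mwh):
    # Fixed plant cost amortized over demand, plus generation.
    return 110.0 + 20_000.0 / demand_mwh

def lcoe_standalone(_demand_mwh):
    return 320.0   # e.g. solar home systems, roughly demand-independent

settlements = [  # (name, distance to grid [km], annual demand [MWh])
    ("dense town", 5.0, 2_000.0),
    ("rural village", 60.0, 150.0),
    ("remote hamlet", 150.0, 8.0),
]
for name, dist, demand in settlements:
    options = {
        "grid extension": lcoe_grid(dist, demand),
        "mini-grid": lcoe_mini_grid(demand),
        "standalone": lcoe_standalone(demand),
    }
    tech = min(options, key=options.get)
    print(f"{name:14s} -> {tech:14s} ({options[tech]:.0f} USD/MWh)")
```

    Dense, grid-proximate demand favours grid extension, mid-size remote demand favours mini-grids, and tiny remote demand favours standalone systems, which is the qualitative logic such models apply per settlement.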


    Download full text (pdf)
    fulltext
  • Public defence: 2020-06-12 09:00 https://kth-se.zoom.us/j/62560073937
    Remnestål, Julia
    KTH, Centres, Science for Life Laboratory, SciLifeLab. KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Protein Science, Affinity Proteomics.
    Dementia Proteomics2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The term dementia encompasses a number of conditions arising as a consequence of tissue degeneration in the brain. This degeneration is caused by molecular events occurring on a cellular level, including inflammation, defective waste disposal and accumulation of insoluble proteins and peptides. Many of these molecular events are in turn also reflected in the composition of the cerebrospinal fluid (CSF), which circulates within and around the brain. This thesis summarises five studies conducted with the aim to explore and profile CSF proteins in the context of dementia and other neurodegenerative disorders. Protein profiles were obtained by so-called suspension bead arrays (SBAs), created by coupling antibodies to color-coded microspheres, allowing detection of more than 350 CSF proteins simultaneously. The majority of the explored proteins are referred to as brain-enriched, entailing that the corresponding genes are highly expressed in brain tissue in comparison to other tissues.

    In Paper I, the SBA technology was utilised to profile about 280 proteins in CSF from several neurodegenerative disorders, i.e. Alzheimer’s disease (AD), dementia with Lewy bodies and Parkinson’s disease. Distinct differences in the CSF proteome were identified depending on the site of collection (ventricular or lumbar) and time point (post mortem or ante mortem). Disease-associated profiles for the two synaptic proteins neuromodulin (GAP43) and neurogranin (NRGN) could be confirmed, with both proteins displaying higher levels in AD compared to controls. High levels of the two proteins were furthermore observed in patients at preclinical stages of AD in two independent cohorts. To verify the identified protein profiles, parallel reaction monitoring (PRM) assays were developed for 17 proteins in Paper II, including GAP43. Eight proteins displayed concordance with the data generated with SBAs, among these GAP43, cholecystokinin, neurofilament medium chain (NF-M), leucine-rich alpha-2-glycoprotein and vascular cell adhesion protein 1.

    In Paper III, the SBA technology was again applied to characterise early dementia-related changes in the CSF proteome by comparing samples from individuals with mild cognitive impairment (MCI), controls and AD patients in two independent cohorts. The MCI individuals were moreover stratified based on the CSF concentration of the core AD biomarkers Aβ42 and tau. The six proteins amphiphysin, aquaporin 4, cAMP regulated phosphoprotein 21, β-synuclein, GAP43 and NF-M all showed significant differences between sample groups in both cohorts. Further exploration of how the pathological processes preceding dementia affect the CSF proteome was done by analysis of 104 brain-enriched proteins in CSF from asymptomatic 70-year-olds in Paper IV. Protein profiles were correlated to Aβ42, t-tau and p-tau CSF concentrations, revealing a large number of proteins displaying significant correlations to tau levels. Upon dividing the asymptomatic individuals based on Aβ42 CSF pathology, some proteins showed significantly different associations in the two groups. Most of the proteins yielding interesting profiles were plasma membrane proteins or proteins connected to synaptic vesicle transport.

    While AD is the most common form of dementia, accounting for more than 60% of all cases worldwide, frontotemporal dementia (FTD) is the most frequently occurring form of young-onset dementia. In Paper V, CSF protein profiles were explored in the context of FTD. Patients with behavioural variant FTD and primary progressive aphasia were compared to unaffected individuals with a high risk of developing FTD. Proteomic differences between patients with FTD and the unaffected individuals were observed already at a global level, and particularly for the six proteins NF-M, neurosecretory protein VGF, neuronal pentraxin receptor, prodynorphin, transmembrane protein 132D and tenascin-R.

    The disease-associated profiles identified in the presented studies provide a basis for future research within dementia proteomics. Whether the proteins identified will have the possibility to aid in clinical diagnosis, prognosis or characterisation of dementia, remains to be evaluated. Given the fortunate situation, especially in Sweden, with access to large and well characterised CSF collections, there are ample opportunities for future proteomic studies to elucidate the true potential of these proteins.

    Download full text (pdf)
    Dementia Proteomics Kappa Remnestaal
  • Public defence: 2020-06-12 14:00 Publikt via ZOOM
    Szipka, Károly
    KTH, School of Industrial Engineering and Management (ITM), Production Engineering.
    Uncertainty Management for Automated Diagnostics of Production Machinery2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Neither production machinery nor production systems will ever become completely describable or predictable. This results in a continuous need for monitoring and diagnostics of such systems in order to manage the related uncertainties. In advanced production systems, uncertainty has to be the subject of a systematic management process to maintain machine health and improve performance. Automation of diagnostics can fundamentally improve this management process by providing an affordable and scalable information source. In this thesis, the important aspects of uncertainty management in production systems are established and serve as a basis for the composition of an uncertainty-based machine diagnostics framework. The proposed framework requires flexible, fast, integrated and automated diagnostics methods. An inertial measurement-based test method is presented in order to satisfy these requirements and enable automated measurements for diagnostics of production machinery. The gained insights and knowledge about production machine health and capability improve the transparency, predictability and dependability of production machinery and production systems. These improvements lead to increased overall equipment effectiveness and a higher level of sustainability in operation.

    Download full text (pdf)
    fulltext
  • Public defence: 2020-06-12 13:00 FB42, Stockholm
    Capel, Francesca
    KTH, School of Engineering Sciences (SCI), Physics, Particle and Astroparticle Physics.
    Cosmic clues from astrophysical particles2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Ultra-high-energy cosmic rays (UHECRs) are charged particles that have been accelerated to extreme energies, such that they are effectively travelling at the speed of light. Interactions of these particles with the Earth’s atmosphere lead to the development of extensive showers of particles and radiation that can be measured with existing technology. Despite decades of research, the origins of UHECRs remain mysterious. However, they are thought to be accelerated within powerful astrophysical sources that lie beyond the borders of our Galaxy. This thesis explores different ideas towards the common goal of reaching a deeper understanding of UHECR phenomenology. Part I concerns the development of a novel space-based observatory that has the potential to detect unprecedented numbers of these enigmatic particles. The feasibility of such a project is demonstrated by the results from the Mini-EUSO instrument, a small ultraviolet telescope that is currently on-board the International Space Station. In Part II, the focus is on fully exploiting the available information with advanced analysis techniques to close the gap between theory and data. UHECRs are closely connected to the production of neutrinos and gamma rays, so frameworks for the joint analysis of these complementary cosmic messengers are also developed. The results presented herein demonstrate that to progress, it is crucial to invest in the development of both detection and analysis techniques. By taking a closer look at the existing data, new clues can be revealed to reach a more comprehensive understanding and better inform the design of future experiments. 

    Download full text (pdf)
    fulltext
  • Public defence: 2020-06-12 10:00 Zoom
    Berggren, Tomas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    On determinantal point processes and random tilings with doubly periodic weights2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis is dedicated to asymptotic analysis of determinantal point processes originating from random matrix theory and random tiling models. Our main interest lies in random tilings of planar domains with doubly periodic weights.

    Uniformly distributed random tiling models are known to be a very rich class of models where many interesting phenomena can be observed. These models have therefore been under investigation for many years and many aspects of the models are by now well understood. Random tiling models with doubly periodic weights are in fact an even richer class of models. However, these models are much more difficult to analyze and for a thorough study of their behavior new ideas are needed. This thesis increases the understanding of random tiling models with doubly periodic weights.

    The thesis consists of three papers and two chapters; one introductory and background chapter and one chapter giving an overview of the papers.

    Paper A deals with linear statistics of the thinned Circular Unitary Ensemble and the thinned sine process. The thinning creates a transition from the Circular Unitary Ensemble and the sine process, respectively, to the Poisson process. We study parts of these transitions in detail.
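
    For background (standard facts about determinantal point processes, not results of the thesis): such a process is characterized by its correlation kernel, and independent thinning acts directly on that kernel,

```latex
\rho_n(x_1,\dots,x_n) \;=\; \det\bigl[K(x_i,x_j)\bigr]_{i,j=1}^{n},
\qquad
K_{\mathrm{thinned}}(x,y) \;=\; p\,K(x,y), \quad 0 < p < 1,
```

    so keeping each point independently with probability p again yields a determinantal process. For the sine process, with kernel K(x,y) = sin(π(x−y))/(π(x−y)), letting p → 0 while rescaling space by 1/p gives the Poisson limit that frames the transitions studied in Paper A.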

    In Papers B and C we study random tiling models with doubly periodic weights. These two papers constitute the main contribution of this thesis.

    In Paper B we give a general method for analyzing a large family of random tiling models. In particular, we provide a double integral formula for the correlation kernel in terms of a Wiener-Hopf factorization of an associated matrix-valued function. We also present a recursive method for constructing the Wiener-Hopf factorization.

    The method developed in Paper B is used in Paper C to analyze the 2×k-periodic Aztec diamond. More precisely, we derive the correlation kernel for the Aztec diamond of finite size and give a detailed description of the model as the size tends to infinity.

    Download full text (pdf)
    fulltext
  • Public defence: 2020-06-05 10:00 https://kth-se.zoom.us/s/68861340458
    Berglund, Emelie
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Gene Technology.
    Molecular and Spatial Profiling of Prostate Tumors2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Every cancer tumor is unique, with characteristics that change over time. The evolution of a full-blown malignancy from a single cell that gives rise to a heterogeneous population of cancer cells is a complex process. The use of spatial information makes a big contribution to understanding the progression of tumors and how patients respond to treatment. Currently, the scientific community is taking a step further in order to understand gene expression heterogeneity in the context of tissue spatial organization, to shed light on cell-to-cell interactions. Technological advances in recent years have increased the resolution at which heterogeneity can be observed. Spatial transcriptomics (ST) is an in situ capturing technique that uses a glass slide containing oligonucleotides to capture mRNAs while maintaining the spatial information of histological tissue sections. It combines histology and Illumina sequencing to detect and visualize the whole-transcriptome information of tissue sections. In Paper I, an AI method was developed to create a computerized tissue anatomy. The rich source of information enables the AI method to identify genetic patterns that cannot be seen by the naked eye. This study also provided insights into gene expression in the environment surrounding the tumor, the tumor microenvironment, which interacts with tumor cells during cancer growth and progression. In Paper II, we investigate the cellular response to treatment. It is well known that virtually all patients with hormone-naïve prostate cancer treated with GnRH agonists will relapse over time and that the cancer will transform into a castration-resistant form, denoted castration-resistant prostate cancer. This study shows that by characterizing the non-responding cell populations, it may be possible to find an alternative way to target them in the early stages and thereby decrease the risk of relapse. In Paper III, we deal with scalability limitations, which in the ST method are represented by a time-consuming library preparation workflow. This study introduces an automated library preparation protocol on the Agilent Bravo Automated Liquid Handling Platform to enable rapid and robust preparation of ST libraries. Finally, Paper IV expands on the first work and illustrates the utility of the ST technology by constructing, for the first time, a molecular view of a cross-section of a prostate organ.

    Download full text (pdf)
    fulltext
  • Niewalda, Tobias
    KTH, School of Engineering Sciences (SCI), Aeronautical and Vehicle Engineering, Rail Vehicles.
    Deep Learning Based Classification of Rail Defects Using On-board Monitoring in the Stockholm Underground2020Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The purpose of this work is to find out if an artificial neural network can be useful for detecting rail squats with the existing Quiet Track Measurement System (QTMS). Squats are surface-initiated rail defects which arise due to rolling contact fatigue. The monitoring system, installed on seven trains running on the green line in the Stockholm underground, aims to improve the maintenance process. The early detection and surveillance of defects help to extend the service life of the tracks and reduce operating costs. An artificial neural network is used to analyse the continuously recorded measurements, which consist of vertical bogie acceleration and surrounding noise, each sampled at a frequency of 22 kHz. In particular, the power spectral density as input to a multi-layer Fully-connected Neural Network (FNN) has proven promising for accurate squat predictions. The supervised learning was carried out according to the one-vs-all principle, i.e. squats versus all other events. A two-hidden-layer FNN has finally been chosen to complement the QTMS. Using the full available frequency range, from almost DC up to 11 kHz (but at minimum 7 kHz), allows good predictions with low false-prediction rates. When concatenating all six measurement channels into a single classifier input, an accuracy of over 96% is achieved for the squat class and up to 99.98% in total. The chosen network type also showed high stability despite quite strong parameter variations and a massive under-representation of squat observations in the measurement data. However, since only limited maintenance information about actual squats is available for labelling and testing, more evaluation is needed. The correct identification of mislabelled squats indicates the high potential of artificial neural networks.
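
    A minimal sketch of the described pipeline, Welch power spectral density features feeding a two-hidden-layer fully-connected network, using placeholder random data in place of the QTMS recordings (the layer sizes, window length and single channel below are assumptions, not the thesis's configuration):

```python
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

FS = 22_000  # sampling frequency of the QTMS channels, per the abstract

def psd_features(signal, fs=FS, fmax=11_000):
    # Welch power spectral density, keeping the band up to fmax (~11 kHz).
    f, pxx = welch(signal, fs=fs, nperseg=1024)
    return np.log10(pxx[f <= fmax] + 1e-12)

# Placeholder data: 200 one-second windows of a single acceleration channel.
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, FS))
labels = rng.integers(0, 2, size=200)   # 1 = squat, 0 = all other events

X = np.array([psd_features(w) for w in windows])
# Two hidden layers; one-vs-all reduces to binary here (squat vs. rest).
clf = MLPClassifier(hidden_layer_sizes=(128, 32), max_iter=300)
clf.fit(X[:150], labels[:150])
print("held-out accuracy:", clf.score(X[150:], labels[150:]))
```

    On the random placeholder data the score is naturally near chance; the point is only the shape of the feature extraction and classifier stages.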

    Download full text (pdf)
    fulltext
  • Public defence: 2020-06-10 15:15 A5:1003, Stockholm
    Samuelsson, Filip
    KTH, School of Engineering Sciences (SCI), Physics, Particle and Astroparticle Physics. Oskar Klein Ctr Cosmoparticle Phys, SE-10691 Stockholm, Sweden.
    Multi-messenger emission from gamma-ray bursts2020Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Multi-messenger astronomy is a very hot topic in the astrophysical community. A messenger is something that carries information. Different astrophysical messenger types are photons, cosmic rays, neutrinos, and gravitational waves. They all carry unique and complementary information to one another. The idea with multi-messenger astronomy is that the more different types of messengers one can obtain from the same event, the more complete the physical picture becomes.

    In this thesis I study the multi-messenger emission from gamma-ray bursts (GRBs), the most luminous events known in the Universe. Specifically, I study the connection of GRBs to extremely energetic particles called ultra-high-energy cosmic rays (UHECRs). UHECRs have an unknown origin despite extensive research. GRBs have long been one of the best candidates for the acceleration of these particles, but a firm connection is yet to be made. In Paper I and Paper II, we study the possible GRB-UHECR connection by looking at the electromagnetic radiation from the electrons that would be accelerated together with the UHECRs. My conclusion is that the signal from these electrons does not match current GRB observations, disfavoring the hypothesis that a majority of UHECRs come from GRBs.

    Download full text (pdf)
    fulltext
  • Public defence: 2020-06-12 09:30 D5, Stockholm
    Yu, Peng
    KTH, School of Engineering Sciences (SCI), Physics, Nuclear Power Safety.
    Modelling and Simulation of Reactor Pressure Vessel Failure during Severe Accidents2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis aims at the development of new coupling approaches and new models for the thermo-fluid-structure coupling problem of reactor pressure vessel (RPV) failure during severe accidents and related physical phenomena. The thesis work consists of five parts: (i) development of a three-stage creep model for RPV steel 16MND5, (ii) development of a thermo-fluid-structure coupling approach for RPV failure analysis, (iii) performance comparison of the new approach that uses volume loads mapping (VLM) for data transfer with the previous approach that uses surface loads mapping (SLM), (iv) development of a lumped-parameter code for quick estimate of transient melt pool heat transfer, and (v) development of a hybrid coupling approach for efficient analysis of RPV failure.

    A creep model called the ‘modified theta projection model’ was developed for the 16MND5 steel so that it covers the three-stage creep process. In the new creep model, creep curves are expressed as a function of time with five parameters, θi (i = 1–4) and m. A dataset for the model parameters was constructed from the available experimental creep curves, given the monotonicity assumption of creep strain versus temperature and stress. New creep curves can be predicted by interpolating model parameters from this dataset, in contrast to the previous method that employs an extra fitting process. The new treatment better accommodates all the experimental curves over wide ranges of temperature and stress loads. The model was implemented in the ANSYS Mechanical code; its predictions successfully captured all three creep stages, and a good agreement was achieved between the experimental and predicted creep curves. For dynamic loads that change with time, the widely used time-hardening and strain-hardening models were implemented with reasonable performance. These properties fulfil the requirements of a creep model for structural analysis.
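
    For orientation, the classical theta projection writes the creep strain as a sum of a decaying and an accelerating term (the role of the thesis's fifth parameter m is specific to the modified model and not reproduced here):

```latex
\varepsilon_{\mathrm{cr}}(t)
\;=\;
\underbrace{\theta_1\!\left(1 - e^{-\theta_2 t}\right)}_{\text{primary creep}}
\;+\;
\underbrace{\theta_3\!\left(e^{\theta_4 t} - 1\right)}_{\text{tertiary creep}} .
```

    Fitting the θ-parameters at each tested temperature and stress, and then interpolating them across loads, is what allows curves to be predicted at load points that were not tested.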

    A thermo-fluid-structure coupling approach was developed by coupling ANSYS Fluent for the fluid dynamics of melt pool heat transfer with ANSYS Structural for the structural mechanics of the RPV. An extension tool was introduced to realize transient load transfer from ANSYS Fluent to ANSYS Structural and to minimize the user effort. Both CFD with turbulence models and the effective model PECM can be employed for predicting melt pool heat transfer. The modified theta projection model was used for the creep analysis of the RPV. The coupling approach not only captures the transient thermo-fluid-structure interaction, but also supports advanced models in both melt pool convection and structural mechanics to improve fidelity and facilitate implementation. The coupling approach performs well in the validation against the FOREVER-EC2 experiment, and can be applied to complex geometries, such as a BWR lower head with a forest of penetrations (control rod guide tubes and instrument guide tubes).

    In the comparative analysis, the VLM and SLM coupling approaches generally showed similar performance, in terms of both their predictability of the FOREVER-EC2 experiment and their applicability to the reactor case. Though the SLM approach predicted slightly earlier failure times than the VLM in both cases, the difference was negligible compared to the large scale of the vessel failure time (~ s). The VLM approach showed higher computational efficiency than the SLM.

    The idea of the hybrid coupling is to employ a lumped-parameter code for a quick estimate of the thermal load, which can then be employed in detailed structural analysis. Such a coupling approach can significantly increase the calculation efficiency, which is important for the case of a prototypical RPV where mechanistic simulation of melt pool convection is computationally expensive and unnecessary. The transIVR code was developed for this purpose; it is not only capable of quick estimates of the transient heat transfer of one- and two-layer melt pools, but also solves the heat conduction problem in the RPV wall with a 2D finite difference method to provide spatial thermal details for the RPV structural analysis. The capabilities of transIVR in modelling two-layer pool heat transfer and transient pool heat transfer were demonstrated by calculations against the UCSB FIBS benchmark case and the LIVE-7V experiment, respectively. The transIVR code was then coupled to the mechanical solver ANSYS Mechanical for detailed RPV failure analysis. Validation against the FOREVER-EC2 experiment indicates that the coupling framework successfully captured the vessel creep failure characteristics.
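
    As an illustration of the finite-difference wall-conduction ingredient, here is a one-dimensional explicit sketch (the solver described above is 2D and coupled to transIVR; the geometry, material properties and boundary temperatures below are placeholders):

```python
import numpy as np

# Explicit 1D transient conduction through a vessel-wall slab, an
# illustrative stand-in for the 2D finite-difference solver in the text.
L, n = 0.15, 60                 # wall thickness [m], grid points
alpha = 1e-5                    # thermal diffusivity, steel-like [m^2/s]
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha        # stability requires dt <= dx^2 / (2 alpha)

T = np.full(n, 400.0)           # initial wall temperature [K]
T_in, T_out = 1600.0, 400.0     # melt-side and outer-surface temperatures [K]

for _ in range(20_000):
    T[0], T[-1] = T_in, T_out   # Dirichlet boundary conditions
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print("mid-wall temperature [K]:", round(T[n // 2], 1))
```

    The resulting through-wall temperature field is the kind of spatial thermal detail that is handed over to the structural (creep) analysis.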

    Download full text (pdf)
    fulltext
  • Public defence: 2020-06-10 10:00 online via Zoom
    Wang, Bochao
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Fluid Mechanics and Engineering Acoustics, Marcus Wallenberg Laboratory MWL.
    Constitutive models of magneto-sensitive rubber under a continuum mechanics basis and the application in vibration isolation2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Due to its durability, stretchability, relatively low stiffness and high damping, rubber is widely used in engineering anti-vibration applications. However, a major deficiency is that, once installed, the mechanical properties of traditional rubber-based devices are fixed, so their adaptability to various loading conditions is poor. An alternative to traditional rubber materials is magneto-sensitive (MS) rubber. The main components of MS rubber are a rubber matrix and ferromagnetic particles. Under a magnetic field, the modulus of MS rubber can be altered rapidly and reversibly. Therefore, compared with conventional rubber-based devices, the stiffness of MS rubber-based devices can be adapted to various loading conditions and an enhanced vibration reduction effect can be achieved. Measurement results reveal that the mechanical behavior of MS rubber is not simple. To be specific, the dynamic modulus of MS rubber has a magnetic, frequency, amplitude and temperature dependency. In order to promote the application of MS rubber in the anti-vibration area, models that depict the above properties are needed. The main goal of this thesis is to model the magnetic, frequency, amplitude and temperature dependence of MS rubber on a continuum mechanics basis. The research results regarding the constitutive modeling consist of three papers (Papers A, C and D). The simulation results show good agreement with the measurement data, which demonstrates the accuracy and feasibility of the developed model. In addition to the constitutive models of MS rubber, an investigation of the application of MS rubber in vibration isolation systems under harmonic and random loading is conducted numerically (Paper B). In order to achieve an enhanced vibration isolation effect, two control algorithms corresponding to the harmonic and random loading cases are developed. Numerical results verify that the vibration isolation effect of the MS rubber isolator is better than that of a traditional rubber-based isolator. The model developed for MS rubber in this thesis deepens the understanding of how magnetic field, frequency, amplitude and temperature affect the mechanical performance of MS rubber. Moreover, the research on the application of MS rubber in vibration isolators and the corresponding control strategies is helpful for the design of MS rubber-based anti-vibration devices.

    Download full text (pdf)
    Constitutive models of magneto-sensitive rubber under a continuum mechanics basis and the application in vibration isolation
  • Public defence: 2020-08-18 13:00 Videolänk kommer / Video link is forthcoming, Stockholm
    de Frias Lopez, Ricardo
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Soil and Rock Mechanics.
    DEM Modelling of Unbound Granular Materials for Transport Infrastructures: On soil fabric and rockfill embankments2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Unbound granular materials (UGM) are widely used as load-bearing layers and for embankment construction within transport infrastructures. These play a significant role in the operation and maintenance of transportation systems. However, pavement and railway engineering still rely heavily on empirical models based on macroscopic observations. This approach results in limited knowledge of the fundamentals at the particle scale that dictate the macroscopic response of the material. In this sense, the discrete element method (DEM) presents a numerical alternative for studying the behaviour of discrete systems with explicit consideration of processes at the particulate level. Additionally, it provides information at the particulate level in a way that cannot be matched by traditional laboratory testing. All of this, in turn, can result in greater micromechanical insight. This thesis aims at contributing to the body of knowledge of the fundamentals of granular matter. UGM for transport infrastructures are studied by means of DEM in order to gain insight into their response under cyclic loading. Two main issues are considered: (1) soil fabric and its effect on the performance of coarse-fine mixtures and (2) modelling of high rockfill railway embankments. Among the main contributions of this research is the establishment of a unified soil fabric classification system based exclusively on force transmission considerations, which furthermore correlates with performance. In particular, fabrics characterized by a strong interaction between the coarse and fine fractions resulted in improved performance. A soil fabric type with a potential for instability was also identified. Regarding embankments, DEM modelling shows that traffic-induced settlements accumulate in the top layers and therefore seem to be unaffected by embankment height above a certain value. A marked influence of degradation, even considering its nearly negligible magnitude, was observed, largely resulting in increased settlements.

    Download full text (pdf)
    fulltext
  • Public defence: 2020-06-11 13:00 Via Zoom -- https://kth-se.zoom.us/meeting/register/u5Isf--grTguHNPDOGMrkrpy5nka38XCSnZG, Stockholm
    Quino Lima, Israel
    KTH, School of Architecture and the Built Environment (ABE), Sustainable development, Environmental science and Engineering, Water and Environmental Engineering. Universidad Mayor de San Andres.
    Hydrogeochemistry and spatial variability of arsenic and other trace elements in the Lower Katari Basin around Lake Titicaca, Bolivian Altiplano.: Impact on drinking water quality and groundwater management.2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Arsenic (As) contamination in drinking water is a world-wide problem. The natural origin of As, its mobility and transport are of great interest in the Bolivian Altiplano (Lower Katari Basin: LKB and Southern Poopó Basin: SPB) due to the presence of mineral ore deposits, brines, hot springs and volcanic rocks. Hydrogeochemical spatio-temporal and spatial variability investigations were applied to groundwater, surface water and sediments with a statistical approach to better understand the spatial distribution of As, major ions and trace elements, to evaluate the sources of dissolved species and to elucidate the processes that govern the evolution of natural water in the LKB. The results reveal high levels of As, boron (B), antimony (Sb), manganese (Mn) and salinity in shallow wells, which exceed the guideline values of the Bolivian regulation (NB-512) and the World Health Organization (WHO). The seasonal variation and its impact on the water quantity, on top of the solid and liquid residues (originating in the Pallina River), pose a significant negative health risk for the communities on the banks of the Katari River. The first evaluation of the hydrogeological study indicates that the groundwater flow is in the southeast to northwest (SE-NW) direction, and that there is an interaction between groundwater and surface water. The spatial distribution of As varies considerably due to the geological characteristics of the area as well as the heterogeneously distributed evaporites in the sediments (in the LKB and SPB). However, the highest concentrations of As are found in the alluvial sediments of the northern region. Sequential extraction of sediment along with geochemical modeling (mineral saturation indices) indicates that iron (Fe) and aluminum (Al) oxides, as well as their hydroxides, are the most important adsorbent minerals of As in the central and southern regions of the LKB. The chemistry of water bodies in the LKB and SPB is strongly influenced by the interaction with the sediment constituents and by spatial-temporal variations. The results of the spatial analysis indicate that, despite the outliers, there is good autocorrelation for As, B and Sb, since Moran's I values are positive. The global spatial dependence analysis indicated a positive and statistically significant spatial autocorrelation (SA) for all cases, and that the trace elements (TEs) are not randomly distributed, at the 99% confidence level.
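
    For reference, the global Moran's I statistic used in such spatial dependence analyses can be computed along the following lines; the well coordinates and arsenic values below are synthetic stand-ins, and inverse-distance weights are one common (assumed) choice of spatial weighting:

```python
import numpy as np

def morans_i(values, coords):
    """Global Moran's I with inverse-distance weights (zero diagonal).
    Values near +1 indicate clustering of similar values, ~0 randomness."""
    z = np.asarray(values, float)
    z = z - z.mean()
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # no self-neighbours
    w = 1.0 / d                          # inverse-distance weights
    n, W = len(z), w.sum()
    return (n / W) * (z @ w @ z) / (z @ z)

# Synthetic stand-in for well data: coordinates [km], As concentration [ug/L]
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 50.0, size=(40, 2))
arsenic = 10.0 + 0.5 * coords[:, 0] + rng.normal(0.0, 2.0, 40)  # west-east trend
print("Moran's I:", round(morans_i(arsenic, coords), 3))
```

    The built-in spatial trend makes I clearly positive, which is the pattern reported for As, B and Sb; significance would then be assessed against a permutation or analytical null distribution.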

    Download full text (pdf)
    fulltext
  • Public defence: 2020-06-09 09:00 https://kth-se.zoom.us/webinar/register/WN_qgz5ej_sQVOiEwrSoar3Ew, Stockholm
    Stefanikova, Estera
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Fusion Plasma Physics.
    Pedestal structure and stability in JET-ILW and comparison with JET-C2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Controlled thermonuclear fusion offers a promising concept for safe and sustainable production of electrical energy. However, there are still many issues to be investigated on the way to a commercial fusion reactor. An important point for detailed studies is connected to the wall materials surrounding the hot thermonuclear plasma. The JET tokamak (the largest fusion experiment in the world) in the United Kingdom completed a major upgrade in 2011, in which the materials of the vessel surrounding the fusion fuel were changed from a carbon-fibre composite (the JET-C wall) to beryllium and tungsten. These new materials are the same as those that will be used in the next-step fusion device, the International Thermonuclear Experimental Reactor, ITER (hence the name ITER-like wall, or JET-ILW), designed to demonstrate the feasibility of a fusion reactor based on the tokamak concept. One of the goals of JET with the ILW is to act as a test bed for ITER technologies and for ITER operating scenarios.

    The overall purpose of the thesis work is to characterise the effect of the ILW on the structure and stability of an edge plasma phenomenon called the pedestal: a steep pressure gradient associated with the H-mode, an operational regime with improved confinement. The aim is to contribute to the understanding of the difference in pedestal performance between JET-C and JET-ILW.

    The work is focused on experimental characterisation of the pedestal structure in deuterium discharges by analysing the experimental data (radial profiles of electron temperature and density measured in H-mode plasmas) from Thomson scattering diagnostics at JET and on investigating the differences in pedestal stability between JET-ILW and JET-C plasmas in terms of the pedestal modelling. The pedestal structure is determined using a modified hyperbolic tangent fit to the experimental Thomson scattering profiles. The modelling is performed with the pedestal predictive code Europed, based on the EPED model commonly used to predict the pedestal height in JET.
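
    A minimal sketch of a modified hyperbolic tangent (mtanh) pedestal fit of the kind described; the exact parameterization used at JET may differ, and the profile data below are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def mtanh(y, s):
    # Hyperbolic tangent modified with a linear core slope s.
    return ((1 + s * y) * np.exp(y) - np.exp(-y)) / (np.exp(y) + np.exp(-y))

def pedestal(r, h, b, r0, w, s):
    # One common mtanh parameterization: height h, offset b,
    # pedestal position r0, pedestal width w, core slope s.
    return b + 0.5 * (h - b) * (1 + mtanh((r0 - r) / (2 * w), s))

# Synthetic "Thomson scattering" profile with noise (illustrative only).
r = np.linspace(0.85, 1.05, 80)                  # normalized radius
truth = pedestal(r, 1.0, 0.05, 0.98, 0.02, 0.1)
data = truth + np.random.default_rng(0).normal(0.0, 0.02, r.size)

popt, _ = curve_fit(pedestal, r, data, p0=(1.0, 0.0, 0.97, 0.03, 0.0))
print("fitted pedestal position:", round(popt[2], 4))
print("fitted pedestal width   :", round(popt[3], 4))
```

    Fitting both the electron temperature and density profiles this way, and comparing the two fitted positions, is what yields quantities such as the pedestal relative shift discussed below.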

    The experimental analysis has shown several differences in the pedestal structure of comparable JET-ILW and JET-C discharges. One of the key differences introduced in this work is the pedestal relative shift (a separation between the positions of the electron density and temperature pedestals), which plays a major role in the difference in pedestal performance between JET-C and JET-ILW. The work shows that the relative shift can vary significantly from pulse to pulse and that, on average, JET-C plasmas have a lower relative shift than JET-ILW plasmas. The pedestal relative shift tends to increase with increasing gas fuelling and heating power. Furthermore, the increase in the relative shift has been empirically correlated with the degradation of the experimental normalized pressure gradient αexp.

    To understand the differences in the JET-C and JET-ILW pedestal stability, parameters that affect the pedestal stability and that tend to vary between comparable JET-C and JET-ILW discharges have been identified. These parameters are the pedestal relative shift, pedestal density neped, effective charge number Zeff, pedestal pressure width wpe, and normalized pressure βN. The modelling performed with the predictive Europed code has shown that these five parameters are sufficient to explain the difference in the pedestal performance between JET-C and JET-ILW.

    Furthermore, the modelling has shown that the relative shift and neped play a major role in affecting the critical normalized pressure gradient αcrit (normalized pressure gradient expected by the model comparable to αexp), while the relative shift, wpe and Zeff have a major impact on the pedestal pressure height. Finally, a possible mechanism that has led to the degradation of the pedestal pressure from JET-C to JET-ILW is proposed.

    Download full text (pdf)
    fulltext
  • Public defence: 2020-06-12 14:00 Zoom-webinar: https://kth-se.zoom.us/webinar/register/WN_Atk97qAwS9OiEW8dTC0Fyg
    Held, Manne
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering and Computer Science (EECS).
    Fuel-Efficient Look-Ahead Control for Heavy-Duty Vehicles with Varying Velocity Demands2020Doctoral thesis, monograph (Other academic)
    Abstract [en]

    The fuel consumption of heavy-duty vehicles can be reduced by using information about the upcoming road section when controlling the vehicles. Most manufacturers of heavy-duty vehicles today offer such look-ahead controllers for highway driving, where the information consists of the road grade and the velocity has only small variations. This thesis considers look-ahead control for applications where the velocity of the vehicle has large variations, such as distribution vehicles or vehicles in mining applications. In such conditions, other look-ahead information is important, for instance legal speed limits and curvature. Fuel-efficient control is found by formulating and solving the driving missions as optimal control problems.

    First, it is shown how look-ahead information can be used to set constraints in the optimal control problems. A velocity reference from a driving cycle is modified to create an upper and a lower bound for the allowed velocity, denoted the velocity corridor. In order to prevent the solution of the optimal control problem from deviating too much from a normal way of driving, statistics derived from data collected during live truck operation are used when formulating the constraints. It is also shown how curvature and speed limits can be used together with actuator limitations and driveability considerations to create the velocity corridor.

    Second, a vehicle model based on forces is used to find energy-efficient velocity control. The problem is first solved using Pontryagin's maximum principle to find the energy savings for different settings of the velocity corridor. The problem is then solved in a receding horizon fashion using a model predictive controller to investigate the influence of the control horizon on the energy consumption. The phasing and timing of traffic lights are then added to the available information to derive optimal control when driving in the presence of traffic lights.

    Third, the vehicle model is extended to include powertrain components in two different approaches. In the first approach, a Boolean variable is added to represent an open or closed powertrain. This enables the vehicle to freewheel, saving fuel by reducing the losses due to engine drag. The problem is formulated as a mixed-integer quadratic program. In the second approach, the full powertrain is modeled, including a fuel map and a model of the gearbox losses, both based on measurements on real components. The problem is solved using dynamic programming, with transitions between states including gear shifts, freewheeling, and coasting in gear; a toy sketch of such a segment-wise dynamic program follows below.
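
    The sketch below is a deliberately stripped-down dynamic program over a discretized velocity grid inside a velocity corridor, minimizing an energy proxy (rolling and air resistance plus positive traction work); the masses, coefficients, corridor bounds and cost model are invented placeholders, with no gears or freewheeling states:

```python
import numpy as np

# Toy dynamic program: choose a velocity per 100 m segment inside a
# "velocity corridor", minimizing resistance work plus positive
# kinetic-energy changes. All numbers are illustrative only.
m, g, cr, cd_a, rho = 40_000, 9.81, 0.006, 6.0, 1.2   # heavy-duty-ish values
ds = 100.0                                            # segment length [m]
v_grid = np.arange(4.0, 26.0, 1.0)                    # velocity grid [m/s]
v_min = np.array([4, 4, 4, 8, 8, 8, 4, 4, 12, 12], float)   # corridor, lower
v_max = np.array([15, 15, 20, 25, 25, 25, 15, 15, 25, 25], float)  # upper

def seg_cost(v0, v1):
    v = 0.5 * (v0 + v1)
    resist = (m * g * cr + 0.5 * rho * cd_a * v**2) * ds   # rolling + drag [J]
    accel = max(0.0, 0.5 * m * (v1**2 - v0**2))            # traction only
    return resist + accel

n = len(v_min)
cost = {v: 0.0 for v in v_grid}          # cost-to-go at the final boundary
for k in reversed(range(n)):             # backward value iteration
    cost = {v0: min(seg_cost(v0, v1) + cost[v1]
                    for v1 in v_grid if v_min[k] <= v1 <= v_max[k])
            for v0 in v_grid}
print("optimal cost from 10 m/s [MJ]:", round(cost[10.0] / 1e6, 2))
```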

    Fourth, the optimal control framework is used to implement an optimal control-based powertrain controller in a real Scania truck. The problem is first solved offline, resulting in trajectories for velocity and freewheeling. These are used online in the vehicle as references for the existing controllers for torque and gear demands. Experiments are performed with fuel measurements, resulting in 16% fuel savings, compared to the 18% savings obtained by solving the optimal control problem.

    Download full text (pdf)
    fulltext
  • Mele, Giampaolo
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Ringh, Emil
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Ek, David
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Izzo, Federico
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA.
    Upadhyaya, Parikshit
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA.
    Jarlebring, Elias
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Preconditioning for linear systems2020Book (Other academic)
    Download full text (pdf)
    fulltext
  • Public defence: 2020-06-05 14:00 Publikt via ZOOM, Stockholm
    Törnblom, Oskar
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.).
    Organizational Design and Leadership Development: The Role of Increasing Complexity​2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Many organizations need to adapt to increasingly complex environments. New forms of organizational design and leadership are called for and, under some circumstances, more collective leadership practices are needed. Furthermore, values and beliefs in some societal contexts foster a general positive bias for collective leadership. Paradoxically, many investment decisions regarding leadership development activities do not pay off. At the same time, the research fields of collective leadership development and on-the-job leader development are underutilized. The research field of leadership is in need of consolidation and integration within and between research areas. There has been much less research done on collective leadership development compared with leader development, and research on leadership development has been focused more on individual and collective change rather than on contextual facilitating factors such as organizational design.

    To address these theoretical and practical challenges, the aim of the thesis was to explore organizational design and leadership development in terms of increasing complexity in the empirical context of technology-, knowledge-, and project-intensive organizations. The research design was centered around two studies that were part of a larger interactive research project and two conceptual studies that jointly investigated (1) organizational design and increasing complexity, (2) leadership development and increasing complexity, (3) how increasingly complex organizational design can foster leadership development. The interactive research project had four goals in terms of creating common learning for the project partners involved, new academic knowledge, and organizational development not only for the participating organizations but also for organizations in general.

    The thesis contributes to the research fields of organizational design and leadership development as well as their intersection. It adds to theory by providing a more fine-grained definition of ways of understanding leadership development according to increasing complexity. Furthermore, it adds to the understanding of how increasingly complex organizational design can foster leadership development, especially collective leadership, thus demonstrating empirical examples of leadership development without traditional leadership development investments.

    The thesis proposes future research on emerging technology as an accelerator for leadership development and interactive research in partnership with organizations in order to further integrate the research fields of organizational design and leadership development. In terms of managerial contributions, a number of suggestions are offered to support better knowledge creation and decision-making regarding organizational design, on-the-job leader development, and especially collective leadership development. Furthermore, a shift from a psychology-centered leadership development approach toward more of a systemic and organizational design-centered leadership development approach that includes both individual and collective dimensions is called for. This shift will potentially change the leadership development industry, making many of the contemporary investments in leadership development obsolete.

    Download full text (pdf)
    fulltext
    Download full text (pdf)
    Appendix 6 Key theoretical definitions in brief
  • Public defence: 2020-06-12 16:00 Seminar Room, Floor 5, Stockholm
    Gomez-Torrent, Adrian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Micro and Nanosystems.
    Submillimeter-Wave Waveguide Frontends by Silicon-on-Insulator Micromachining2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis presents novel radiofrequency (RF) frontend components in the submillimeter-wave (sub-mmW) range implemented by silicon micromachining, or deep reactive ion etching (DRIE). DRIE is rapidly becoming a driving technology for the fabrication of waveguide components and systems when approaching the terahertz (THz) frequency range. The conventional method to manufacture microwave waveguide components, CNC-milling, shows important limitations when used at sub-mmW frequencies or above, due to the reduced size of the waveguides. At the same time, the classic electromagnetic designs, oriented to CNC-milling, are often not suitable for their fabrication using alternative technologies. The work in this thesis aims to develop fabrication-oriented electromagnetic structures, making use of the full flexibility of silicon on insulator (SOI) micromachining, and enabling the implementation of complex RF frontends at a low fabrication complexity.

    The first part of the thesis reports on a turnstile orthomode transducer (OMT) in the WM-864 band (220 – 330 GHz). OMTs are key components in the feed-chain for radio astronomy, communications, or radiometry applications. However, their complex geometry has often limited their use when approaching the THz range, where polarization diversity is commonly avoided, or optical systems are preferred.

    The second part reports on a high-gain and broadband waveguide corporate-fed array antenna in the WM-570 band (330 – 500 GHz). High-gain and broadband antennas are required for the future generation of THz wireless communications. Reflector and lens antennas can meet these specifications, but their fabrication for the THz range requires precision machining, resulting in high cost, low yield, and small-scale production. The use of silicon-micromachined antenna arrays overcomes these issues while providing a more compact frontend.

    In the third part of the thesis, a parallel plate waveguide (PPW) leaky wave antenna (LWA) fed by a quasi-optical beamforming network (BFN) in the WM-864 band is presented. The antenna frontend generates a pencil-shaped beam scanning in elevation. The compact design, large bandwidth, and beam steering capabilities make this antenna a suitable frontend for THz radar applications.

    The final part of this thesis reports on a novel waveguide single pole double throw (SPDT) switch in the WM-570 band. The switch is demonstrated in a two-port network configuration with two switching states (ON/LOAD), used for receiver calibration, or for avoiding backward waves in transmitter switching. A more complex 1×4 switching matrix is also designed for the implementation of an active radar antenna operating at 340 GHz.

    Download full text (pdf)
    fulltext
  • Public defence: 2020-06-10 10:00 N/A (Via videolink due to Corona virus)
    Tomasson, Egill
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electric Power and Energy Systems.
    Impact of High Levels of Variable Renewable Energy on Power System Generation Adequacy: Methods for analyzing and ensuring the generation adequacy of modern, multi-area power systems2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The generation adequacy of electricity supply has been an ongoing concern since the restructuring of the industry. Ensuring generation adequacy was a rather straightforward task in the era of natural monopolies. Whose responsibility was it to ensure generation adequacy as the industry became deregulated and more fragmented? Who is willing to finance rarely used generating units? After decades of experience with the competitive electricity market, the question of whether market forces alone are sufficient to ensure generation adequacy still remains.

    Recent energy policies have moreover set a goal of a high share of renewable energy in electricity markets. The presence of high levels of renewable generation makes the supply side of the market more uncertain. This volatility in energy production induces volatility in energy prices which means that the revenue stream of conventional generating technologies is more uncertain than it has traditionally been. This can even deteriorate the economics of some generators to the point where they exit the electricity market. The installed capacity of dispatchable generation can therefore be reduced.

    These developments bring up the question of whether the generation adequacy of modern and future, deregulated and highly variable power systems is ensured. This dissertation focuses on modeling the generation adequacy of modern power systems with a high penetration of variable renewable energy sources. Moreover, the dissertation looks at some solutions with the aim of ensuring the generation adequacy of such systems through various means such as coordinated reserves, energy storage as well as utilizing the flexibility of the demand side of the market.

    The models developed in this dissertation are verified using well-known test systems as well as through large-scale analysis of real-world systems. Beyond simulating the power systems, the models are designed for high computational efficiency, achieved through advanced Monte Carlo simulation and optimization methods that apply decomposition to speed up the simulations.
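
    To illustrate the Monte Carlo ingredient, here is a crude sketch estimating the loss-of-load probability of a toy system with forced outages and variable wind; the unit data, wind model and load distribution are invented, and the dissertation's models are far more detailed and variance-reduced:

```python
import numpy as np

# Crude Monte Carlo estimate of loss-of-load probability (LOLP) for a toy
# system: thermal units with forced-outage rates plus variable wind.
rng = np.random.default_rng(42)
unit_cap = np.array([400, 400, 300, 200, 100], float)   # unit capacities [MW]
outage_p = np.array([0.05, 0.05, 0.08, 0.10, 0.10])     # forced outage rates
wind_cap = 600.0                                        # installed wind [MW]

n = 100_000                                             # sampled snapshots
available = (rng.random((n, unit_cap.size)) > outage_p) @ unit_cap
wind = wind_cap * rng.beta(2, 5, n)                     # skewed wind output
load = rng.normal(1100.0, 120.0, n)                     # load snapshots [MW]

lolp = np.mean(available + wind < load)
print(f"LOLP ~ {lolp:.4f} (roughly {lolp * 8760:.1f} shortfall hours/year "
      f"if snapshots are read as independent hours)")
```

    Variance-reduction and decomposition techniques of the kind the dissertation develops matter precisely because such naive sampling needs very many snapshots to resolve rare shortfall events accurately.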

    Download full text (pdf)
    fulltext
  • Public defence: 2020-06-12 10:00 Stockholm
    Bergendal, Erik
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Chemistry, Surface and Corrosion Science.
    Fatty Acid Self-Assembly at the Air–Water Interface: Curvature, Patterning, and Biomimetics: A Study by Neutron Reflectometry and Atomic Force Microscopy2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    For more than a hundred years of interfacial science, long chain fatty acids have been the primary system for the study of floating monolayers at the air–water interface due to their amphiphilic nature and system simplicity: an insoluble hydrocarbon chain and a soluble carboxyl group at a flat air–water interface. Despite―or perhaps rather due to―the assumed simplicity of such systems and the maturity of the research field, the seemingly fundamentally rooted notion of a two-dimensional water surface has yet to be challenged.

    The naturally occurring methyl-branched long chain fatty acid 18-methyleicosanoic acid and one of its isomers form monolayers consisting of monodisperse domains tens of nanometres across, whose size varies with the placement of the methyl branch. The ability of such domain-forming monolayers to texture the air–water interface in three dimensions, as a consequence of the hydrocarbon packing constraints imposed by the methyl branch, is investigated.

    In this work, neutron reflectometry has been used to study monolayers of branched long chain fatty acids directly at the air–water interface, allowing precise probing of how a deformable water surface is affected by monolayer structure. Such films were also transferred by Langmuir–Blodgett deposition to the air–solid interface and subsequently imaged by atomic force microscopy. Combined, the results unanimously, and all but unambiguously, show that the self-assembly of branched long chain fatty acids textures the air–water interface, inducing domain formation through local curvature of the water surface and thus controverting the preconceived notion of a planar air–water interface. The size and shape of the observed domains are shown to be tuneable through three different parameters: mixing branched and unbranched fatty acids, varying the hydrocarbon length of the straight chain, and altering subphase electrolyte properties. Each of these factors effectively changes the local curvature of the monolayer, much as in analogous three-dimensional systems in bulk lyotropic liquid crystals. This precise tuneability opens a route to sustainable nanopatterning. Finally, the results suggest a plausible self-healing hypothesis for why the surfaces of hair and wool contain a significant proportion of branched fatty acids.
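    One textbook way to reason about why a methyl branch induces curvature is the surfactant packing parameter p = v/(a0·lc): added chain volume without added chain length pushes p above unity and favours a curved rather than planar film. The sketch below is a minimal, purely illustrative calculation using Tanford's empirical chain formulas and an assumed fatty acid headgroup area; it is not taken from the thesis.

    ```python
    # Illustrative packing-parameter estimate; all numbers are assumptions.
    def tanford_volume(n_carbons: int) -> float:
        """Tanford's empirical hydrocarbon chain volume, in cubic angstroms."""
        return 27.4 + 26.9 * n_carbons

    def tanford_length(n_carbons: int) -> float:
        """Tanford's empirical fully extended chain length, in angstroms."""
        return 1.5 + 1.265 * n_carbons

    A0 = 21.0  # assumed headgroup area of a fatty acid, in square angstroms

    for label, extra_methyls in [("straight C20 chain", 0), ("methyl-branched C20 chain", 1)]:
        v = tanford_volume(20) + extra_methyls * 26.9  # a branch adds volume, not length
        p = v / (A0 * tanford_length(20))
        print(f"{label}: packing parameter p = {p:.2f}")
    ```

    Values of p slightly above 1 mark the regime where a flat film becomes frustrated, which is consistent with the curvature-driven domain formation reported here.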

    Download full text (pdf)
    (fulltext) Fatty Acid Self-Assembly at the Air–Water Interface
  • Li, Yuanyuan
    et al.
    KTH, School of Chemical Science and Engineering (CHE), Fibre and Polymer Technology. KTH, School of Chemical Science and Engineering (CHE), Centres, Wallenberg Wood Science Center.
    Fu, Qiliang
    Rojas, Ramiro
    KTH, School of Chemical Science and Engineering (CHE), Centres, Wallenberg Wood Science Center. KTH, School of Chemical Science and Engineering (CHE), Fibre and Polymer Technology, Coating Technology.
    Yan, Min
    KTH, School of Electrical Engineering (EES).
    Lawoko, Martin
    KTH, School of Chemical Science and Engineering (CHE), Fibre and Polymer Technology. KTH, School of Chemical Science and Engineering (CHE), Centres, Wallenberg Wood Science Center.
    Berglund, Lars
    KTH, School of Chemical Science and Engineering (CHE), Centres, Wallenberg Wood Science Center. KTH, School of Chemical Science and Engineering (CHE), Fibre and Polymer Technology.
    Lignin-Retaining Transparent Wood2017In: ChemSusChem, ISSN 1864-5631, E-ISSN 1864-564X, Vol. 10, no 17, p. 3445-3451Article in journal (Refereed)
    Abstract [en]

    Optically transparent wood, combining optical and mechanical performance, is an emerging material for light-transmitting building structures aimed at reducing energy consumption. One of the main obstacles in transparent wood fabrication is delignification, in which around 30 wt% of the wood tissue is removed to reduce light absorption and refractive index mismatch. This step is time consuming and not environmentally benign. Moreover, lignin removal weakens the wood structure, limiting the fabrication of large structures. A green and industrially feasible method has now been developed to prepare transparent wood. Up to 80 wt% of the lignin is preserved, leading to a stronger wood template than the delignified alternative. After polymer infiltration, a high-lignin-content transparent wood with a transmittance of 83%, haze of 75%, thermal conductivity of 0.23 W m^-1 K^-1, and work-to-fracture of 1.2 MJ m^-3 (an order of magnitude higher than glass) was obtained. The preparation method is efficient and applicable to various wood species. The transparent wood obtained shows potential for application in energy-saving buildings.

    Download full text (pdf)
    fulltext
  • Quino Lima, Israel
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Sustainable development, Environmental science and Engineering, Water and Environmental Engineering. KTH Royal Institute of Technology.
    Ormachea Muñoz, Mauricio
    Universidad Mayor de San Andrés.
    Ramos Ramos, Oswaldo Eduardo
    Universidad Mayor de San Andrés.
    Bhattacharya, Prosun
    KTH, School of Architecture and the Built Environment (ABE), Land and Water Resources Engineering (moved 20130630), Environmental Geochemistry and Ecotechnology. KTH, Superseded Departments (pre-2005), Civil and Environmental Engineering. KTH, School of Architecture and the Built Environment (ABE), Sustainable development, Environmental science and Engineering, Water and Environmental Engineering.
    Quispe Choque, Raul
    Universidad Mayor de San Andrés.
    Quintanilla Aguirre, Jorge
    Universidad Mayor de San Andrés.
    Sracek, Ondra
    Palacký University.
    Hydrochemical assessment with respect to arsenic and other trace elements in the Lower Katari Basin, Bolivian Altiplano2019In: Groundwater for Sustainable Development, p. 281-293Article in journal (Other academic)
    Abstract [en]

    Hydrochemical investigations of groundwater and surface water were carried out to better understand the spatial distribution of As, major ions and trace elements. The study was carried out to evaluate the sources of dissolved species and to elucidate the processes that govern the evolution of natural water in the Lower Katari Basin. The study area lies close to Lake Titicaca (Cohana Bay) and is formed by sediments of the Quaternary system, deposited in fluvio-glacial to fluvio-lacustrine environments, and by geologic formations of the Devonian and Neogene systems of volcanic origin. The area has several environmental problems, mainly caused by contaminants such as heavy metals, nutrients, and bacteria. These problems are linked to urban and industrial wastes, natural geologic conditions, and mining activities carried out upstream in the Katari Basin, where rivers discharge into the Cohana Bay.

    A total of 37 water samples were collected during the wet season: 31 groundwater samples, including drinking water wells, and six surface water samples. Hierarchical cluster analysis and principal component analysis were applied to the hydrochemical data. The results show high salinity in groundwater, related to evaporation, causing serious problems for groundwater quality and rendering it unsuitable for drinking. Dissolved As concentrations range from 0.7 to 89.7 μg/L; the principal source of As could be the alteration of volcanic rocks. More than 48% of the shallow groundwater samples exceeded the WHO guideline value for As, and more than 22% exceeded it for NO3-. Groundwater has a neutral to slightly alkaline pH and a moderately oxidizing character. The groundwater chemistry reveals considerable variability, ranging from Na-SO4,Cl type through mixed Na-HCO3 type to Ca,Na-HCO3,Cl type. The distribution of trace elements shows a large range of concentrations. Speciation of As indicates that the predominant oxidation state is As(V). Geochemical modelling indicates that As could be associated with iron oxides and hydroxides, which are probably the most important mineral phases for As adsorption. The spatial distribution and variation of dissolved As concentrations in groundwater are governed by the variability in the geological characteristics of the region, which raises significant concern about drinking water quality.
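    As a schematic illustration of the multivariate workflow mentioned above (hierarchical cluster analysis and principal component analysis of hydrochemical data), the sketch below applies both to a made-up sample-by-parameter matrix. The data, the choice of a log transform and the number of clusters are assumptions for illustration only.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Hypothetical matrix: 37 samples x 6 parameters (e.g. As, Na, Cl, SO4, HCO3, NO3).
    rng = np.random.default_rng(0)
    X = rng.lognormal(mean=1.0, sigma=0.8, size=(37, 6))

    # Log-transform the skewed concentrations, then standardize so every parameter
    # contributes equally regardless of its units.
    Z = StandardScaler().fit_transform(np.log10(X))

    pca = PCA(n_components=2)
    scores = pca.fit_transform(Z)
    print("Explained variance ratios:", np.round(pca.explained_variance_ratio_, 2))

    # Agglomerative (Ward) clustering, cut into three water groups.
    labels = fcluster(linkage(Z, method="ward"), t=3, criterion="maxclust")
    print("Cluster labels:", labels)
    ```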

    Download full text (pdf)
    fulltext
  • Public defence: 2020-06-04 15:00 https://kth-se.zoom.us/j/67302879470, Stockholm
    Zhang, Liang
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Industrial Biotechnology. AdBIOPRO, VINNOVA Competence Centre for Advanced Bioproduction by Continuous Processing.
    Development of mathematical modelling for the glycosylation of IgG in CHO cell cultures2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Chinese hamster ovary (CHO) cells are the most popular expression system for the production of biopharmaceuticals. More than 80% of the approved monoclonal antibodies (mAbs), or immunoglobulin G (IgG), are produced with these cells. Glycosylation is a common post-translational modification that is important for therapeutic mAbs: it affects their stability, half-life and immunological activities. Substantial work has shown that glycosylation can be affected by culture conditions in manufacturing, e.g. pH, temperature and media components. To achieve good control of glycosylation, a number of mathematical models have been developed. However, most of them target cell line engineering, while very few can be used to design media components that match a given glycoprofile.

    This thesis presents developments in mathematical modelling for glycosylation prediction and for the experimental design of feeding different combinations of carbon sources in CHO cell cultures. The first study investigates the impacts of mannose, galactose, fructose and fucose on the IgG glycoprofile. Specifically, we look at intracellular nucleotide sugars in fed-batch cultures where glucose is absent and lactate is used as a complementary carbon source. The second study is based on the concept of elementary flux modes (EFM) and the mass balance of the glycan residues. A mathematical model named Glycan Residue Balance Analysis (GReBA) is developed for predicting the glycosylation profiles of IgG in pseudo-perfusion cultures fed with combinations of glucose, mannose, galactose and lactate. The model is further optimized for designing feeding strategies in perfusion cell cultures to obtain a desired glycoprofile. In the last study, a probabilistic graphical model (PGM) based on a Bayesian network (BN) is developed for glycosylation prediction in cultures where multiple variable factors affect glycosylation.

    The results show that manipulating the different sugars in the media can be used to control glycosylation. Both the GReBA and PGM models support glycosylation prediction and experimental design.
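    The residue mass-balance idea behind a model like GReBA can be caricatured as a small linear system: a stoichiometric matrix maps the activity of glycosylation routes to the residue abundances they deposit, and route activities are recovered by constrained least squares. The sketch below is a generic toy of that idea, not the GReBA model itself; the matrix, measurements and residue choices are invented.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Toy stoichiometric matrix S (residue types x hypothetical glycosylation routes):
    # each column counts the galactose / fucose / sialic acid residues a route adds.
    S = np.array([
        [2.0, 1.0, 0.0],  # galactose residues per route
        [1.0, 1.0, 1.0],  # fucose residues per route
        [1.0, 0.0, 0.0],  # sialic acid residues per route
    ])

    # Hypothetical measured residue abundances (mol residue per mol IgG).
    b = np.array([1.6, 1.0, 0.5])

    # Non-negative least squares recovers route activities f with S @ f ~ b.
    f, residual = nnls(S, b)
    print("Route activities:", np.round(f, 3), "| residual:", round(residual, 4))
    ```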

    Download full text (pdf)
    fulltext
  • Li, Junhao
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Theoretical Chemistry and Biology.
    Li, Weihua (Contributor)
    East China University of Science and Technology.
    Tu, Yaoquan (Contributor)
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Theoretical Chemistry and Biology.
    Mechanistic Insights into the Regio‐ and Stereoselectivities of Testosterone and Dihydrotestosterone Hydroxylation Catalyzed by CYP3A4 and CYP19A12020In: Chemistry - A European Journal, ISSN 0947-6539, E-ISSN 1521-3765, Vol. 26, p. 6214-6223Article in journal (Refereed)
    Abstract [en]

    The hydroxylation of nonreactive C−H bonds can be readily catalyzed by a variety of metalloenzymes, especially cytochrome P450s (P450s). The mechanism of P450-mediated hydroxylation has been studied intensively, both experimentally and theoretically. However, understanding the regio- and stereoselectivities of substrates hydroxylated by P450s remains a great challenge. Herein, we use a multi-scale modeling approach to investigate the selectivity of testosterone (TES) and dihydrotestosterone (DHT) hydroxylation catalyzed by two important P450s, CYP3A4 and CYP19A1. For CYP3A4, two distinct binding modes for TES/DHT were predicted by docking and molecular dynamics simulations, in which the experimentally identified sites of metabolism of TES/DHT can access the catalytic center. The regio- and stereoselectivities of TES/DHT hydroxylation were further evaluated by quantum mechanical and ONIOM calculations. For CYP19A1, we found that sites 1β, 2β and 19 can access the catalytic center, with intrinsic reactivity 2β > 1β > 19. However, our ONIOM calculations indicate that hydroxylation is favored at site 19 for both TES and DHT, which is consistent with experiments and reflects the importance of the catalytic environment in determining the selectivity. Our study unravels the mechanism underlying the selectivity of TES/DHT hydroxylation mediated by CYP3A4 and CYP19A1 and is helpful for understanding the selectivity of other substrates hydroxylated by P450s.
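    As a minimal illustration of how computed barriers translate into predicted site selectivity, the sketch below Boltzmann-weights hypothetical activation free energies under transition-state theory. The barrier values are invented and are not the ONIOM results of this paper.

    ```python
    import math

    R, T = 8.314e-3, 310.0  # gas constant in kJ/(mol K); temperature in K

    # Hypothetical activation free energies (kJ/mol) for hydroxylation at three sites.
    barriers = {"1beta": 62.0, "2beta": 58.0, "19": 55.0}

    # Relative rates k_i ~ exp(-dG_i / RT); normalizing gives predicted product fractions.
    rates = {site: math.exp(-dg / (R * T)) for site, dg in barriers.items()}
    total = sum(rates.values())
    for site, k in rates.items():
        print(f"site {site}: predicted fraction {k / total:.1%}")
    ```

    Even a few kJ/mol of difference shifts the predicted product distribution strongly, which is why the protein environment captured by ONIOM can overturn the intrinsic gas-phase reactivity order.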

    Download full text (pdf)
    fulltext
  • Public defence: 2020-06-05 10:00 Publicly via ZOOM, Stockholm
    Ternstedt, Patrik
    KTH, School of Industrial Engineering and Management (ITM), Materials Science and Engineering.
    A Study of Parameters that Influence the Kinetics of the AOD Decarburisation Process2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis focuses on the AOD process, an important metallurgical reactor in stainless steel production. More specifically, the thesis is limited to the decarburisation step, the first of three process steps in the AOD converter. The main research question is to increase the knowledge of the reasons for random differences in decarburisation rates during the process.

    In the first part of the study, physical modeling is used to study mixing in AOD converters. The parameters studied included bath height, gas flow rate and chemical reactions. The results showed that the mixing time decreased with an increased gas flow rate or an increased bath height. In addition, the influence of the top slag on the fluid flow and mixing time was studied. The results showed that the flow field was influenced by the slag phase and that it is important to account for the solid slag fraction when simulating the fluid flow and mixing time of real AOD converters. However, the results from this first part of the thesis illustrate that mixing is not the rate-limiting step for decarburisation in AOD converters.

    Instead, the focus was shifted to whether the slag was the cause of random differences in the decarburisation rate. Slag samples were collected from an industrial AOD reactor. These slags are quite unique, since they contain mainly solids and only a small liquid fraction. Thus, petrography was used to study the samples, and a new methodology, combining several different techniques, was developed to characterize them. The common phases in decarburisation slag were identified. Samples analysed with SEM and EDS showed good agreement with calculations made in Thermo-Calc. Overall, it was shown that the slag characteristics change during the decarburisation period and that these changes can be determined using the new methodology.

    In the last part of the thesis, the commercial AOD process control model TimeAOD2 was used in combination with Thermo-Calc calculations to study how the process could be improved so that the slag composition becomes most beneficial for the kinetics of the decarburisation part of the AOD converter process. The results show that it is possible to predict the slag composition, and especially the amount of liquid slag in the sample. This, in turn, makes it possible to better estimate the optimal lime addition depending on the silicon content of the steel and the amount of carry-over slag from the electric arc furnace. Furthermore, it is shown that too large lime additions lead to an increased heating time without improving the decarburisation rate.
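    The reported mixing-time trends can be illustrated with a generic gas-stirring correlation of the form tau = C * eps^(-n), where eps is the specific stirring power delivered by the injected gas. The constants and operating numbers below are placeholders chosen for illustration; they are not fitted values from this thesis.

    ```python
    # Illustrative mixing-time correlation for a gas-stirred melt; all constants assumed.
    C, N_EXP = 600.0, 0.4          # placeholder correlation constants
    RHO_STEEL, G = 7000.0, 9.81    # melt density (kg/m^3), gravitational acceleration (m/s^2)

    def stirring_power(q_gas: float, bath_height: float, melt_volume: float) -> float:
        """Simplified buoyancy stirring power per unit melt volume, W/m^3."""
        return RHO_STEEL * G * q_gas * bath_height / melt_volume

    def mixing_time(q_gas: float, bath_height: float, melt_volume: float) -> float:
        return C * stirring_power(q_gas, bath_height, melt_volume) ** (-N_EXP)

    for q in (0.5, 1.0, 2.0):  # gas flow rate, m^3/s (illustrative)
        print(f"Q = {q} m^3/s -> mixing time ~ {mixing_time(q, 1.5, 10.0):.0f} s")
    ```

    Consistent with the physical-modelling results, the estimated mixing time falls as either the gas flow rate or the bath height increases.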

    Download full text (pdf)
    fulltext