  • Public defence: 2019-12-16 10:00 FB42, Stockholm
    Rzeszutek, Elzbieta
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Chemistry, Glycoscience.
    Cell wall biosynthesis in the pathogenic oomycete Saprolegnia parasitica (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The oomycete Saprolegnia parasitica is a fungus-like microorganism responsible for the fish disease saprolegniosis, which causes substantial economic losses in aquaculture. Currently, there is no efficient method to control the infection, and methods for disease management are therefore urgently needed. One promising approach to tackling the pathogen is the inhibition of cell wall biosynthesis, specifically of the enzymes involved in carbohydrate biosynthesis. The cell wall of S. parasitica consists mainly of cellulose and β-1,3- and β-1,6-glucans, whereas chitin is present only in minute amounts. The available genome sequence allowed the identification of six putative chitin (Chs) and cellulose (CesA) synthase genes. The main objective of this work was to characterize the CHSs and CesAs from S. parasitica and to test the effect of cell wall-related inhibitors on pathogen growth. The tested inhibitors included nikkomycin Z, a competitive inhibitor of CHS, as well as inhibitors of cellulose biosynthesis, namely flupoxam, CGA325'615 and compound I (CI). All drugs strongly reduced the growth of S. parasitica and inhibited the in vitro formation of chitin or cellulose, as demonstrated by a radiometric assay. The chemicals also affected the expression of some of the corresponding Chs and CesA genes.

    One of the CHSs, namely SpCHS5, was successfully expressed in yeast and purified to homogeneity as a full-length protein. The recombinant enzyme was biochemically characterized and demonstrated to form chitin crystallites in vitro. Moreover, our data indicate that SpCHS5 most likely occurs as a homodimer, which can further assemble into larger multi-subunit complexes. Point mutations of conserved amino acids allowed us to identify the residues essential for the activity and processivity of the enzyme.

    In addition to the cell wall related inhibitors, a biosurfactant naturally produced by Pseudomonas species, massetolide A, was tested, showing strong inhibition of S. parasitica growth.

    Altogether, our data provide key information on the fundamental mechanisms of chitin and cellulose biosynthesis in oomycetes and the biochemical properties of the enzymes involved. They also demonstrate that the enzymes involved in cell wall biosynthesis represent promising targets for anti-oomycete drugs, even when the corresponding polysaccharides, such as chitin, occur in small amounts in the cell wall.

  • Public defence: 2019-12-16 10:00 F3, Stockholm
    Sommerfeldt, Nelson
    KTH, School of Industrial Engineering and Management (ITM), Energy Technology, Applied Thermodynamics and Refrigeration.
    Solar PV in prosumer energy systems: A techno-economic analysis on sizing, integration, and risk (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In the transition towards a sustainable energy system, building mounted solar photovoltaics (PV) have unique benefits; they require no additional land and the energy is generated directly at load centers. Within residential buildings, multi-family homes (MFH) are particularly interesting because of the economies of scale and their greater potential for emissions reductions.

    This thesis identifies and describes value propositions for solar PV within Swedish multi-family houses via three branches of inquiry: system sizing optimization, quantification of investment risk, and the techno-economic potential of PV/thermal (PVT) collectors integrated with ground source heat pumps (GSHP). Underpinning these investigations is a comprehensive review of technical and economic models for solar PV, resulting in a catalogue of performance indicators and applied techniques.

    From the sizing analysis, no objective, techno-economically optimal PV system size is found without including the prosumer’s personal motives. Prioritizing return on investment results in small systems, whereas systems sized for net-zero energy can be profitable in some buildings. There is also a strong economic incentive to adopt communal electricity metering to increase self-consumption, system size, and economic return. Monte Carlo analysis is used to quantify investment uncertainty, finding that well-designed systems have an 81% chance of earning a 3% real return on investment, and even without subsidies there is a calculated 100% chance of having a positive return. PVT integrated GSHP can reduce the land needed for boreholes by up to 87% with a lower lifecycle cost than district heating, thereby broadening the heat pump market and reducing barriers to heating electrification.
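
    The Monte Carlo step described above can be sketched in a few lines. All parameter values below are illustrative assumptions, not the thesis's actual inputs; the key identity used is that, at a 3% real discount rate, a positive net present value is equivalent to a real return above 3%.

```python
import numpy as np

# Monte Carlo sketch of PV investment risk (illustrative parameters only).
rng = np.random.default_rng(0)
n = 100_000

invest = 12_000.0                         # SEK per kWp, assumed installed cost
lifetime = 30                             # assumed years of operation
discount = 0.03                           # 3% real discount rate

# Uncertain inputs: annual yield and value of the electricity produced
yield_kwh = rng.normal(950.0, 80.0, n)    # kWh per kWp per year
value_sek = rng.normal(1.0, 0.15, n)      # SEK per kWh

# NPV at the 3% real rate; NPV > 0 is equivalent to a real return above 3%
annuity = (1 - (1 + discount) ** -lifetime) / discount
npv = -invest + yield_kwh * value_sek * annuity

prob = (npv > 0).mean()
print(f"P(real return > 3%) = {prob:.2f}")
```

    Sweeping the cost and electricity-value assumptions shows how the probability of clearing a given return threshold responds to system design, which is the kind of question the thesis's risk quantification addresses.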

    The quantitative results provide guidance for Swedish MFH owners while the methodology presents solar PV value in a more useful manner for prosumers to identify their personal motives in decision making. This approach is also useful for researchers, business leaders, and policy makers to understand the prosumer perspective and promote adoption of PV in the built environment.

  • Public defence: 2019-12-16 10:00 Kollegiesalen, Stockholm
    Aurell, Alexander
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Topics in the mean-field type approach to pedestrian crowd modeling and conventions (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis consists of five appended papers, primarily addressing topics in pedestrian crowd modeling and the formation of conventions.

    The first paper generalizes a pedestrian crowd model for competing subcrowds to include nonlocal interactions and an arbitrary (but finite) number of subcrowds. Each pedestrian is granted a 'personal space' and is affected by the presence of other pedestrians within it. The interaction strength may depend on subcrowd affinity. The paper investigates the mean-field type game between subcrowds and derives conditions for the reduction of the game to an optimization problem.

    The second paper suggests a model for pedestrians with a predetermined target they have to reach. The fixed and non-negotiable final target leads us to formulate a model with backward stochastic differential equations of mean-field type. Equilibrium in the game between the tagged pedestrians and a surrounding crowd is characterized with the stochastic maximum principle. The model is illustrated by a number of numerical examples.

    The third paper introduces sticky reflected stochastic differential equations with boundary diffusion as a means to include walls and obstacles in the mean-field approach to pedestrian crowd modeling. The proposed dynamics allow the pedestrians to move and interact while spending time on the boundary. The model only admits a weak solution, leading to the formulation of a weak optimal control problem.

    The fourth paper treats two-player finite-horizon mean-field type games between players whose state trajectories are given by backward stochastic differential equations of mean-field type. The paper validates the stochastic maximum principle for such games. Numerical experiments illustrate equilibrium behavior and the price of anarchy.

    The fifth paper treats the formation of conventions in a large population of agents that repeatedly play a finite two-player game. The players access a history of previously used action profiles and form beliefs on how the opposing player will act. A dynamical model in which more recent interactions are considered more important in the belief-forming process is proposed. Convergence of the history to a collection of minimal CURB blocks and, for a certain class of games, to Nash equilibria is proven.

  • Public defence: 2019-12-17 10:00 Kollegiesalen, Stockholm
    Su, Chang
    KTH, School of Industrial Engineering and Management (ITM), Energy Technology, Applied Thermodynamics and Refrigeration.
    Building heating solutions in China: A spatial system analysis (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Modern, clean, accessible and affordable building space heating is key to future sustainable development in China. However, it is impossible to recommend identical building space heating solutions for all spaces in a country as large as China. The choice of the most feasible building space heating solution is associated with a number of local characteristic spatial parameters, and stakeholders still suffer from an insufficient understanding of at which locations, and under what conditions, to choose a certain technology. The present thesis therefore aims at filling this research gap in four steps: first, review the current space heating situation in China; second, develop a systematic evaluation method for a proper choice of building heating solution in different geolocations of China; third, demonstrate the efficacy of the proposed method through case studies; fourth, analyse the Chinese energy sector administration infrastructure and its influence on building heating solutions.

    Step one is to understand the current status of building space heating in China, including what technologies currently prevail and where they are implemented, as well as their application scales. It is found that, under existing energy structures, coal as the primary energy source is extensively consumed in space heating systems. Coal-based regional boilers and combined heat and power district heating are prevalent in North China. Distributed heating, such as reversible air-conditioners, still dominates South China. During the past decade, sustainable energy space heating has been growing rapidly under a series of national policy initiatives, and will continue to grow in the future.

    Following the current status review, a systematic method featuring spatial analysis is developed to compare the various heating options and find the best alternative. The method contains three system boundary levels, which reflect the characteristics of the space heating technology, the heat source, the heat sink and the primary energy system. At each system level, local spatial parameters are analyzed. A set of key performance indicators is selected to quantitatively compare the relative advantages and disadvantages of implementing one building space heating solution over another from techno-economic-environmental as well as geographical perspectives.

    Case studies are then carried out to demonstrate the application of the method. In case study one, two Chinese cities with different local spatial conditions are chosen. Ground source heat pumps and air source heat pumps are compared with the status-quo space heating solutions, which are coal boilers and electric boilers. The results fall into three aspects. Technically, heat pumps are more efficient than boilers from a primary energy point of view. Economically, ground source heat pumps have to reach a seasonal coefficient of performance of 3.7 for a competitive payback period against existing heating solutions. Environmentally, heat pumps have to reach a critical seasonal coefficient of performance of around 2.5 to guarantee their environmental advantages over directly burning coal for space heating, as long as coal is the dominant source of energy for producing electricity. Such a threshold is fairly easy to reach considering the coefficients of performance of the heat pumps on the market.
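
    The environmental break-even COP mentioned above follows from a simple primary-energy comparison. The sketch below uses assumed efficiency values for the boiler, power plant, and grid, not figures from the thesis:

```python
# Per unit of delivered heat, a coal boiler burns 1/eta_boiler units of coal,
# while a heat pump running on coal-dominated electricity burns
# 1/(COP * eta_plant * eta_grid). Setting the two equal gives the critical COP.
eta_boiler = 0.80   # assumed coal boiler heat efficiency
eta_plant = 0.38    # assumed coal power plant electric efficiency
eta_grid = 0.93     # assumed transmission/distribution efficiency

cop_critical = eta_boiler / (eta_plant * eta_grid)
print(f"critical seasonal COP ~ {cop_critical:.1f}")  # -> critical seasonal COP ~ 2.3
```

    Any seasonal COP above this threshold means the heat pump emits less CO2 per unit of heat than direct coal combustion, which is consistent with thresholds in the 2.4-2.5 range being easy to clear for heat pumps on the market.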

    Case study two investigates the potential of seawater heat pumps in four coastal cities from north to south China. From a techno-economic perspective, seawater heat pumps in North China can save up to 18% of primary energy use in space heating and can have a discounted payback period as short as 4 years compared with coal boilers. In southern Chinese cities, on the other hand, seawater heat pumps can save up to 14% of primary energy use in space heating, but the discounted payback period is often more than 10 years compared with the status-quo system. Environmentally, seawater heat pumps in North China have to reach a critical seasonal coefficient of performance of around 2.4 to guarantee their potential for carbon emissions savings compared with fossil fuel boilers. In South China, seawater heat pumps generally emit less greenhouse gases than competing technologies. Geographically speaking, northern coastal cities are more feasible for seawater heat pump applications than southern cities, as many buildings in northern coastal cities are within a proper distance of the sea for efficient utilization of seawater for space heating and cooling.

    The energy administration structure and energy policies in China are analyzed in parallel with the case studies, in order to understand how energy management in China is regulated and how effective such energy policies can be. It is shown that energy administrations in China have great influence on the implementation of energy technologies, and that many energy policies are quite effective in promoting renewable space heating technologies.

    In conclusion, stakeholders are advised to adopt the system method proposed in this thesis to promote the best building heating solution based on local spatial characteristics. From the case studies it is concluded that, for heat pumps, a number of prerequisites have to be fulfilled for a more successful application in China. Future emphasis should be placed on heat pump efficiency improvements, operation management and cost reduction. Meanwhile, increasing the share of zero-carbon electricity in the energy system should be a long-term goal, so that the environmental benefits of heat pumps become more prominent.

  • Public defence: 2019-12-17 10:00 Sal C, Electrum, Kista
    Ghoorchian, Kambiz
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Graph Algorithms for Large-Scale and Dynamic Natural Language Processing (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In Natural Language Processing, researchers design and develop algorithms to enable machines to understand and analyze human language. These algorithms benefit multiple downstream applications, including sentiment analysis, automatic translation, automatic question answering, and text summarization. Topic modeling is one such algorithm, solving the problem of categorizing documents into multiple groups with the goal of maximizing intra-group document similarity. However, the emergence of short texts like tweets, snippets, comments, and forum posts as the dominant source of text in our daily interactions and communications, as well as the main medium for news reporting and dissemination, increases the complexity of the problem due to scalability, sparsity, and dynamicity. Scalability refers to the volume of the messages being generated, sparsity is related to the length of the messages, and dynamicity is associated with the rate of change in the content and topical structure of the messages (e.g., the emergence of new phrases). We improve the scalability and accuracy of Natural Language Processing algorithms from three perspectives, by leveraging innovative graph modeling and graph partitioning algorithms, incremental dimensionality reduction techniques, and rich language modeling methods. We begin by presenting a solution for multiple disambiguation on short messages, as opposed to traditional single disambiguation. The solution uses a simple graph representation model to represent topical structures as dense partitions in that graph, and applies disambiguation by extracting those topical structures using an innovative distributed graph partitioning algorithm. Next, we develop a scalable topic modeling algorithm using a novel dense graph representation and an efficient graph partitioning algorithm.
    Then, we analyze the effect of the temporal dimension to understand the dynamicity in online social networks, and present a solution for geo-localization of users on Twitter using a hierarchical model that combines partitioning of the underlying social network graph with temporal categorization of the tweets. The results show the effect of temporal dynamicity on users' spatial behavior. This result leads to the design and development of a dynamic topic modeling solution, involving an online graph partitioning algorithm and a significantly stronger language modeling approach based on the skip-gram technique. The algorithm shows strong improvements in scalability and accuracy compared to the state-of-the-art models. Finally, we describe a dynamic graph-based representation learning algorithm that modifies the partitioning algorithm to develop a generalization of our previous work. A strong representation learning algorithm is proposed that can extract high-quality distributed and continuous representations from any sequential data with local and hierarchical structural properties similar to natural language text.
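
    The idea of extracting topical structures as dense partitions of a graph can be illustrated with a generic spectral bipartition. This is a minimal stand-in, not the thesis's distributed partitioning algorithm, and the toy co-occurrence graph below is an assumption:

```python
import numpy as np

# Spectral bipartitioning: the sign of the Fiedler vector (eigenvector of
# the graph Laplacian for the second-smallest eigenvalue) splits the graph
# into two dense clusters.
def fiedler_partition(A):
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    w, v = np.linalg.eigh(L)                # eigenvalues in ascending order
    return v[:, 1] >= 0                     # sign of the Fiedler vector

# Toy "topic" graph: two 4-cliques joined by a single bridge edge (3-4)
A = np.zeros((8, 8))
for group in ([0, 1, 2, 3], [4, 5, 6, 7]):
    for i in group:
        for j in group:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0

part = fiedler_partition(A)
print(part[:4], part[4:])  # the two cliques land on opposite sides of the cut
```

    Real short-text graphs are far larger and sparser, which is why the thesis relies on a scalable distributed partitioner rather than a dense eigendecomposition, but the notion of topics as dense partitions is the same.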

  • Public defence: 2019-12-18 09:00 Kollegiesalen, Stockholm
    Filipović, Marko
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST). Bernstein Center Freiburg and Faculty of Biology, University of Freiburg, Germany.
    Characterisation of inputs and outputs of striatal medium spiny neurons in health and disease (2019). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Striatal medium spiny neurons (MSNs) play a crucial role in various motor and cognitive functions. They are separated into those belonging to the direct pathway (dMSNs) and the indirect pathway (iMSNs) of the basal ganglia, depending on whether they express D1 or D2 type dopamine receptors, respectively. In this thesis I investigated the input processing of both MSN types, the characteristics of dMSN outputs, and the effect that aberrant iMSN activity has on the subthalamic nucleus-globus pallidus externa (STN-GPe) network.

    In order to verify a previous result from a computational study claiming that dMSNs should receive either more or stronger total input than iMSNs, I performed an analysis of in vivo whole-cell MSN recordings in healthy and dopamine (DA) depleted (6OHDA) anesthetized mice. To test this prediction, I compared subthreshold membrane potential fluctuations and spike-triggered average membrane potentials of the two MSN types. I found that dMSNs in healthy mice exhibited considerably larger fluctuations over a wide frequency range, as well as significantly faster depolarization towards the spiking threshold, than iMSNs. However, these effects were not present in recordings from 6OHDA animals. Together, these findings strongly suggest that dMSNs do receive stronger total input than iMSNs in the healthy condition.

    I also examined how different concentrations of dopamine affect neural trial-by-trial (or response) variability in a biophysically detailed compartmental model of a direct-pathway MSN. Some of the sources of trial-by-trial variability include synaptic noise, the neural refractory period, and ongoing neural activity. The focus of this study was on the effects of two particular properties of the synaptic input: correlations of synaptic input rates, and the balance between excitatory and inhibitory inputs (E-I balance). The model demonstrates that dopamine is in general a significant diminisher of trial-by-trial variability, but that its efficacy depends on the properties of the synaptic input. Moreover, input rate correlations and changes in the E-I balance by themselves also proved to have a marked impact on the response variability.

    Finally, I investigated the beta-band phase properties of the STN-GPe network, known for its exaggerated beta-band oscillations during Parkinson's disease (PD). The current state-of-the-art computational model of the network can replicate both transient and persistent beta oscillations, but fails to capture the beta-band phase alignment between the two nuclei seen in human recordings. This was particularly evident during simulations of the PD condition, where STN or GPe received additional stimulation in order to induce pathological levels of beta-band activity. Here I show that by manipulating the percentage of neurons in either population that receives stimulation, it is possible to increase the STN-GPe phase difference heterogeneity. Furthermore, a similar effect can be achieved by adjusting the synaptic transmission delays between the two populations. Quantifying the difference between human recordings and network simulations, I provide the set of parameters for which the model produces the greatest correspondence with experimental results.
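
    A spike-triggered average of the membrane potential, as used in the recordings analysis above, can be computed with a few lines of array code. The synthetic trace, window length, and depolarizing ramp below are assumptions for illustration:

```python
import numpy as np

# Spike-triggered average (STA): for each spike, average the membrane
# potential over a fixed window preceding the spike time.
def spike_triggered_average(v, spike_idx, window):
    segments = [v[i - window:i] for i in spike_idx if i >= window]
    return np.mean(segments, axis=0)

# Synthetic trace: baseline noise plus a depolarizing ramp before each "spike"
rng = np.random.default_rng(1)
v = rng.normal(-70.0, 1.0, 5000)          # membrane potential, mV
spikes = np.arange(500, 5000, 500)        # 9 assumed spike times (samples)
for s in spikes:
    v[s - 50:s] += np.linspace(0.0, 10.0, 50)   # ramp toward threshold

sta = spike_triggered_average(v, spikes, window=100)
print(sta[-1] - sta[0])  # STA rises toward the spike, revealing the ramp
```

    On real whole-cell data, a steeper rise of the STA toward the spike is the kind of signature used to argue that dMSNs depolarize faster toward threshold than iMSNs.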

    The full text will be freely available from 2019-12-25 09:00
  • Public defence: 2019-12-18 09:30 T2, Huddinge
    Moustaid, Elhabib
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Perspectives on Modeling and Simulation of Urban Systems with Multiple Actors and Subsystems (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Cities are the spaces of interaction between social, physical, political, and economic entities, which makes planning and intervening in such systems difficult. Urban systems are complex adaptive systems in that their behaviours are often the result of the interaction of their components. The growth of urban systems is driven by mass urbanization; their complexity is the result of interactions between their constituent systems and components.

    Simulations and models, as tools for exploring urban systems, face many challenges before they can usefully support intervention. Throughout the past decades, the use of simulation models has focused on providing tools for managing functions and systems within metropolitan and urban environments. The growing cognizance of the complexity of these environments, and the maturity of complexity science as a field for studying complex systems, allow complexity science methods to be applied to urban systems not only as physical systems but as social systems too.

    As learning from simulations and models can occur both during their construction and their use, this thesis focuses on model and simulation building, running, and final use. The thesis takes into account two main aspects of urban systems. First, urban systems are often multi-stakeholder, that is, systems in which multiple stakeholders intervene at the same time, sometimes without clear boundaries and agency over sub-parts of the system. Second, urban systems can have a multi-subsystem structure, where each subsystem often has its own objectives and affects the rest of the system in unfamiliar ways.

    The thesis investigates, through a multicase study with three case studies, five main themes in simulation modeling that relate to increasing the validity and usefulness of models for complex urban systems. Those themes are as follows: (1) the ability of simulations to be tools that capture complexity in ways similar to the real target systems, (2) the effects of including experts in the construction of simulation models, (3) the ways quantitative and qualitative modeling can together make simulations and models more useful, (4) the value of simulation modeling for studying connections in systems that are multi-system and multi-stakeholder, and (5) the ability to learn from models during the model-building journey.

    The case studies included are the modeling of a city pedestrian network, metropolitan emergency care provision, and urban mental health dynamics. The case studies provided a diversity of system granularity. The methods used for each of the case studies also differed, in order to study different levels of inclusion of expert knowledge, data, and theoretical models.

    Besides its contribution to each of the case studies, with new models and simulation approaches, the thesis contributes to the five themes it investigated. It showed that simulation modeling is able to exhibit multiple elements of complexity. It also showed the ability of expert knowledge to help models become more useful and valid, either by increasing their realism or their level of representation. This result is achieved by the contextualization of the expert knowledge in the case of pedestrian modeling, and by its full exploration in the mental health modeling. Furthermore, the thesis shows ways in which simulation and modeling can find and investigate bridges between urban subsystems. The outcomes suggest that simulation modeling can be a useful tool for exploring different kinds of complexity in urban systems as multi-actor and multi-system systems. Models can mirror the complexity of urban systems in their structure. They can also be ways of exploring non-intuitive behaviors and dynamics. Expert knowledge, in particular, is shown throughout the thesis to help simulations achieve more validity and usefulness.

  • Public defence: 2019-12-18 10:00 F3, Stockholm
    Pålsson, Sara
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA.
    Boundary integral methods for fast and accurate simulation of droplets in two-dimensional Stokes flow (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Accurate simulation of viscous fluid flows with deforming droplets creates a number of challenges. This thesis identifies these principal challenges and develops a numerical methodology to overcome them. Two-dimensional viscosity-dominated fluid flows are exclusively considered in this work. Such flows find many applications, for example, within the large and growing field of microfluidics; accurate numerical simulation is of paramount importance for understanding and exploiting them.

    A boundary integral method is presented which enables the simulation of droplets and solids with a very high fidelity. The novelty of this method is in its ability to accurately handle close interactions of drops, and of drops and solid boundaries, including boundaries with sharp corners. The boundary integral method is coupled with a spectral method to solve a PDE for the time-dependent concentration of surfactants on each of the droplet interfaces. Surfactants are molecules that change the surface tension and are therefore highly influential in the types of flow problems which are considered herein.

    A method's usefulness is not dictated by accuracy alone. It is also necessary that the proposed method is computationally efficient. To this end, the spectral Ewald method has been adapted and applied. This yields solutions with computational cost O(N log N), instead of O(N^2), for N source and target points.
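
    The cost difference comes from the structure of the direct sum: every target point interacts with every source point. The sketch below evaluates a single-layer log kernel, a simplified stand-in for the Stokes kernels in the thesis, naively in O(N^2); the spectral Ewald method instead handles the smooth far-field part with FFTs to reach O(N log N). The kernel choice and point setup are illustrative assumptions:

```python
import numpy as np

# Naive O(N^2) evaluation of u(x_j) = sum_k log|x_j - y_k| q_k,
# skipping self-interactions.
def naive_sum(pts, density):
    d = pts[:, None, :] - pts[None, :, :]     # (N, N, 2) pairwise offsets
    r = np.linalg.norm(d, axis=-1)            # pairwise distances
    np.fill_diagonal(r, 1.0)                  # log(1) = 0 removes self terms
    return (np.log(r) * density[None, :]).sum(axis=1)

rng = np.random.default_rng(2)
N = 200
pts = rng.random((N, 2))                      # random source/target points
q = rng.normal(size=N)                        # layer density values

u = naive_sum(pts, q)
print(u.shape)
```

    Doubling N quadruples the work in this direct form; with the FFT-accelerated far-field sum the cost grows only as N log N, which is what makes simulations with many droplet interface points tractable.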

    Together, these innovations form a highly accurate, computationally efficient means of dealing with complex flow problems. A theoretical validation procedure has been developed which confirms the accuracy of the method.

  • Public defence: 2019-12-19 10:00 Room nr: B4:1026 Code: FB42, Stockholm
    Zakomirnyi, Vadim
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Theoretical Chemistry and Biology. Siberian Federal University, Krasnoyarsk, RU.
    Multicomponent Resonant Nanostructures: Plasmonic and Photothermal Effects (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In recent decades, plasmonic nanoparticles have attracted considerable attention due to their ability to localize electromagnetic energy at a scale much smaller than the wavelength of optical radiation. The study of optical plasmon waveguides (OPWs) in the form of chains of nanoparticles is important for modern photonics. However, the widespread use of OPWs is limited due to the suppression of the resonance properties of classical plasmon materials under laser irradiation. The study of the influence of nanoparticle heating on the optical properties of waveguides and the search for new materials capable of stable functioning at high temperatures is an important task.

    In this thesis, the processes occurring during the heating of plasmonic nanoparticles and OPWs are studied. For this purpose, a model was developed that takes into account the heat transfer between the particles of an OPW and the environment. The calculations used temperature-dependent optical constants. As one possible way to avoid thermal destabilization of plasmon resonances, new materials for OPWs formed by nanoparticles were proposed. I show that titanium nitride is a promising thermally stable material that might be useful for the manufacturing of OPWs operating under high-intensity laser radiation.

    Another currently hot topic is the study of periodic structures of resonant nanoparticles. Periodic arrays of nanoparticles have a unique feature: the manifestation of collective modes, which are formed due to the hybridization of a localized surface plasmon resonance, or a Mie resonance, with the Rayleigh lattice anomaly. Such pronounced hybridization leads to the appearance of narrow surface lattice resonances, whose quality factor is hundreds of times higher than that of the localized surface plasmon resonance alone. Structures that can support not only electric but also magnetic dipole resonances are becoming extremely important for modern on-chip photonic systems. An example of a material for such particles is silicon. Using the method of generalized coupled dipoles, I studied the optical response of arrays of silicon nanoparticles. It is shown that, under certain conditions, selective hybridization of only one of the dipole moments with the Rayleigh anomaly occurs.

    To analyze the optical properties of intermediate-sized particles with N = 10^3-10^5 atoms and particle diameter d < 12 nm, an atomistic approach, in which the polarizabilities are obtained from the atoms of the particle, could fill an important gap in the description of nanoparticle plasmons between the quantum and classical extremes. For this purpose I introduced an extended discrete interaction model in which every atom contributes to the formation of the optical properties of nanoparticles within this size range. In this range, first-principles approaches are not applicable due to the high number of atoms, and classical models based on bulk material dielectric constants are not applicable due to the strong influence of quantum size effects and corrections to the dielectric constant. To parametrize this semi-empirical model I proposed a method based on the concept of plasmon length. To evaluate the accuracy of the model, I performed calculations of the optical properties of nanoparticles of different shapes: regular nanospheres, nanocubes and nanorods. Subsequently, the model was used to calculate hollow nanoparticles (nano-bubbles).

  • Public defence: 2019-12-19 10:15 F3, Stockholm
    Montecchia, Matteo
    KTH, School of Engineering Sciences (SCI), Mechanics.
    Numerical and modelling aspects of large-eddy and hybrid simulations of turbulent flows (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In this study, the explicit algebraic sub-grid scale (SGS) model (EAM) has been extensively validated in wall-resolved large-eddy simulations (LES) of wall-bounded turbulent flows at different Reynolds numbers and a wide range of resolutions. Compared to eddy-viscosity based models, the formulation of the EAM is more consistent with the physics and allows to accurately capture SGS anisotropy,which is relevant especially close to walls.The present work aims to extend the validation of the EAM to larger Reynolds numbers using codes with different orders of numerical accuracy.The first simulations, performed by using a pseudo-spectral code, show that the use of the EAM, compared to the dynamic Smagorinsky model (DSM), leads to significant improvements in the prediction of the first-and second order statistics of turbulent channel flow.These improvements are observed from relatively low to  reasonably high Reynolds numbers and with coarse grids.The evaluation of the EAM was continued by implementing and testing of the EAM in the general-purpose finite-volume code OpenFOAM.Several tests of LES of turbulent channel flow have shown thatthe use of the Rhie and Chow (R&C) interpolation in OpenFOAM induces significant numerical dissipation.A new custom-built solver has been utilized in order to minimize the dissipation without generating significant adverse effects. The use of the EAM, together with the new solver, gives a substantially improved prediction of the mean velocity profiles as compared to predictions using the DSM, resulting in roughly 50% reduction in the grid point requirements to achieve a given degree of accuracy. In periodic hill flow, LES with the EAM agreed reasonably well with the reference dataat different bulk Reynolds numbers and reduced the misprediction of the first- and second order statistics observed in LES with DSM.The reduction of the R&C filter dissipation was also shown to be beneficial for the prediction of the mean quantities. 
    An analysis of the skin friction along the lower wall reveals spanwise-elongated, almost axisymmetric vortical structures generated by the Kelvin-Helmholtz instability. These structures introduce a significant amount of anisotropy. The last part of the study involved the development of a novel hybrid RANS-LES model in which explicit algebraic Reynolds stress modelling is applied in both the RANS and LES regions. Validations have been conducted on turbulent channel and periodic hill flows at different Reynolds numbers. The explicit algebraic Reynolds stress model for improved delayed detached-eddy simulation (EARSM-IDDES) gives reasonable predictions of the mean quantities and Reynolds stresses in both geometries considered. The use of EARSM-IDDES, compared to the k-omega SST-IDDES model, improves the estimation of the quantities close to the wall. The present work has shown that the use of the EAM in wall-resolved LES of wall-bounded flows in simple and complex geometries leads to a substantial reduction of computational requirements in both high-accuracy and general-purpose codes, compared to the use of eddy-viscosity models. In hybrid simulations, the EARSM-IDDES shows clear potential for capturing the physics of wall-bounded flows.
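    To illustrate the eddy-viscosity baseline the abstract compares the EAM against, the sketch below computes the classic Smagorinsky SGS viscosity, nu_t = (C_s * delta)^2 * |S|, from a resolved velocity gradient. The function name, the value of the constant, and the test gradient are illustrative assumptions for this sketch, not code or parameter choices from the thesis.

```python
import numpy as np

def smagorinsky_viscosity(grad_u, delta, c_s=0.17):
    # nu_t = (C_s * delta)^2 * |S|, with S the resolved strain-rate
    # tensor and |S| = sqrt(2 * S_ij * S_ij).
    s = 0.5 * (grad_u + grad_u.T)           # symmetric strain-rate tensor
    s_mag = np.sqrt(2.0 * np.sum(s * s))    # strain-rate magnitude |S|
    return (c_s * delta) ** 2 * s_mag

# Pure shear du/dy = 1 gives |S| = 1, so nu_t = (C_s * delta)^2.
grad_u = np.array([[0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
nu_t = smagorinsky_viscosity(grad_u, delta=0.1)
```

    Because nu_t is a scalar, the modelled SGS stress is aligned with the resolved strain rate; this isotropy is exactly what the EAM relaxes near walls.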

  • Public defence: 2020-01-09 10:00 F3, Stockholm
    Aguilar, Xavier
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Performance Monitoring, Analysis, and Real-Time Introspection on Large-Scale Parallel Systems2020Doctoral thesis, monograph (Other academic)
    Abstract [en]

    High-Performance Computing (HPC) has become an important scientific driver. A wide variety of research, ranging from drug design to climate modelling, is nowadays performed on HPC systems. Furthermore, the tremendous computing power of such HPC systems allows scientists to simulate problems that were unimaginable a few years ago. However, the continuous increase in size and complexity of HPC systems is turning the development of efficient parallel software into a difficult task. Therefore, the use of performance monitoring and analysis is a must in order to unveil inefficiencies in parallel software. Nevertheless, performance tools also face challenges as a result of the size of HPC systems, for example, coping with the huge amounts of performance data generated.

    In this thesis, we propose a new model for performance characterisation of MPI applications that tackles the challenge of big performance data sets. Our approach uses Event Flow Graphs to balance the scalability of profiling techniques (generating performance reports with aggregated metrics) with the richness of information of tracing methods (generating files with sequences of time-stamped events). In other words, graphs make it possible to encode ordered sequences of events without storing the whole sequence of such events; therefore, they need much less memory and disk space, and are more scalable. We demonstrate in this thesis how our Event Flow Graph model can be used as a trace compression method. Furthermore, we propose a method to automatically detect the structure of MPI applications using our Event Flow Graphs. This knowledge can afterwards be used to collect performance data in a smarter way, reducing for example the amount of redundant data collected. Finally, we demonstrate that our graphs can be used beyond trace compression and automatic analysis of performance data. We propose a new methodology for using Event Flow Graphs in the task of visual performance data exploration.
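    The core compression idea described above, encoding the order of events without storing every event, can be sketched as a minimal transition graph: nodes are event types and weighted edges record how often one event follows another. The function and the toy trace below are hypothetical illustrations of this principle, not the thesis's actual Event Flow Graph implementation.

```python
from collections import defaultdict

def build_event_flow_graph(trace):
    # Each edge (a, b) counts how many times event b directly
    # followed event a. Order information is retained, but repeated
    # subsequences collapse onto the same edges instead of growing
    # the stored data linearly with trace length.
    edges = defaultdict(int)
    for a, b in zip(trace, trace[1:]):
        edges[(a, b)] += 1
    return dict(edges)

# A repetitive MPI-like event sequence of 6 events compresses to 4 edges:
trace = ["init", "send", "recv", "send", "recv", "finalize"]
graph = build_event_flow_graph(trace)
```

    For long iterative applications the trace grows with every iteration while the graph stays the same size, which is the intuition behind using such graphs for trace compression and structure detection.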

    In addition to the Event Flow Graph model, we also explore in this thesis the design and use of performance data introspection frameworks. Future HPC systems will be very dynamic environments providing extreme levels of parallelism, but with energy constraints, considerable resource sharing, and heterogeneous hardware. Thus, the use of real-time performance data to orchestrate program execution in such a complex and dynamic environment will be a necessity. This thesis presents two different performance data introspection frameworks that we have implemented. These introspection frameworks are easy to use, and provide performance data in real time with very low overhead. We demonstrate, among other things, how our approach can be used to reduce in real time the energy consumed by the system.

    The approaches proposed in this thesis have been validated in different HPC systems using multiple scientific kernels as well as real scientific applications. The experiments show that our approaches to performance characterisation and performance data introspection are not intrusive at all, and can be a valuable contribution to the performance monitoring of future HPC systems.

  • Public defence: 2020-01-10 14:00 Kollegiesalen, Stockholm
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Diverse Sounds: Enabling Inclusive Sonic Interaction2019Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This compilation thesis collects a series of publications on designing sonic interactions for diversity and inclusion. The presented papers focus on case studies in which musical interfaces were either developed or reviewed. While the described studies are substantially different in their nature, they all contribute to the thesis by providing reflections on how musical interfaces could be designed to enable inclusion rather than exclusion. Building on this work, I introduce two terms: inclusive sonic interaction design and Accessible Digital Musical Instruments (ADMIs). I also define nine properties to consider in the design and evaluation of ADMIs: expressiveness, playability, longevity, customizability, pleasure, sonic quality, robustness, multimodality and causality. Inspired by the experience of playing an acoustic instrument, I propose to enable musical inclusion for under-represented groups (for example persons with visual and hearing impairments, as well as elderly people) through the design of Digital Musical Instruments (DMIs) in the form of rich multisensory experiences allowing for multiple modes of interaction. At the same time, it is important to enable customization to fit user needs, both in terms of gestural control and provided sonic output. I conclude that the computer music community has the potential to actively engage more people in music-making activities. In addition, I stress the importance of identifying challenges that people face in these contexts, thereby enabling initiatives towards changing practices.

  • Public defence: 2020-01-17 14:00 F3, Stockholm
    Vallejos, Pablo
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Fusion Plasma Physics.
    Modeling RF waves in hot plasmas using the finite element method and wavelet decomposition: Theory and applications for ion cyclotron resonance heating in toroidal plasmas2019Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Fusion energy has the potential to provide a sustainable solution for generating large quantities of clean energy for human societies. The tokamak fusion reactor is a toroidal device where the hot ionized fuel (plasma) is confined by magnetic fields. Several heating systems are used in order to reach fusion relevant temperatures. Ion cyclotron resonance heating (ICRH) is one of these systems, where the plasma is heated by injecting radio frequency (RF) waves from an antenna located outside the plasma.

    This thesis concerns the modeling of RF wave propagation and damping in hot tokamak plasmas. However, solving the wave equation is complicated by spatial dispersion. This effect turns the wave equation into an integro-differential equation that is difficult to solve using common numerical tools. The objective of this thesis is to develop numerical methods that can handle spatial dispersion and account for the geometric complexity outside the core plasma, such as the antenna and low-density regions (or SOL). The main results of this work are the development of the FEMIC code and the so-called iterative wavelet finite element scheme.

    FEMIC is a 2D axisymmetric code based on the finite element method. Its main feature is the integration of the core plasma with the SOL and antenna regions, where arbitrary geometric complexity is allowed. Moreover, FEMIC can apply a dielectric response in the SOL and in the region between the SOL and the core plasma (i.e. the pedestal). The code can account for perpendicular spatial dispersion (or FLR effects) for the fast wave only, which is sufficient for modeling harmonic cyclotron damping and transit time magnetic pumping. FEMIC was used for studying the effect of poloidal phasing on the ICRH power deposition on JET and ITER, and was successfully benchmarked against other ICRH modeling codes in the fusion community.

    The iterative wavelet finite element scheme was developed in order to account for spatial dispersion in a rigorous way. The method adds spatial dispersion effects to the wave equation by using a fixed point iteration scheme. Spatial dispersion effects are evaluated using a novel method based on Morlet wavelet decomposition. The method has been tested successfully for parallel and perpendicular spatial dispersion in one-dimensional models. The FEMIC1D code was developed in order to model ICRH and to study the properties of the numerical scheme. FEMIC1D was used to study second harmonic heating and mode conversion to ion-Bernstein waves (IBW), including a model for the SOL and pedestal. By studying the propagation and damping of the IBW, we verified that the scheme can account for FLR effects.
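    The fixed-point structure of the iterative scheme described above can be sketched generically: solve the local (dispersion-free) problem, evaluate the dispersion term on the previous iterate, and repeat until the update is negligible. The operator names, tolerances, and the toy contraction problem below are assumptions for illustration only, not the FEMIC1D implementation or its wavelet-based dispersion evaluation.

```python
import numpy as np

def fixed_point_solve(solve_local, dispersion, f, u0, tol=1e-10, max_iter=200):
    # Iterate u_{k+1} = L^{-1}(f + D(u_k)) until the update is small.
    # solve_local applies the inverse of the local (dispersion-free)
    # operator L; dispersion evaluates the spatially dispersive term D
    # on the previous iterate.
    u = u0
    for _ in range(max_iter):
        u_new = solve_local(f + dispersion(u))
        if np.linalg.norm(u_new - u) < tol * max(np.linalg.norm(u_new), 1.0):
            return u_new
        u = u_new
    return u

# Toy contraction: L = I and D(u) = 0.5*u, so u = f + 0.5*u has the
# exact solution u = 2*f, which the iteration approaches geometrically.
f = np.ones(4)
u = fixed_point_solve(lambda r: r, lambda v: 0.5 * v, f, np.zeros(4))
```

    The scheme converges when the dispersion correction acts as a contraction relative to the local operator; in the toy problem the error halves at every step.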

  • Public defence: 2020-01-27 13:00 F3, Stockholm
    Rubensson, Isak
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, System Analysis and Economics. Trafikförvaltningen, Region Stockholm.
    Making Equity in Public Transport Count2019Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Political and public focus on equity and justice outcomes of public policies is on the rise all over the world. Equity is both philosophically motivated and often decreed, by law and in planning directives, to be monitored when policies are changed; however, these equity assessments are oftentimes vague and qualitative and carry low weight in policy decision processes. For the public transport administrator, all decisions on operations, fare management and subsidies have distributional consequences that form the equity outcomes of public transport provision. In this thesis, the distributional outcomes of public transport subsidies, fare schemes, transport quality provision and public transport accessibility are studied quantitatively. New methodology is developed with regard to the assignment of subsidy levels to individual trips, graphics on geographical fare distribution, and a measure of vertical distribution. Some findings are that public transport subsidies have low horizontal but high vertical equity, and that flat fares, contrary to much of the literature, have high vertical equity when cities have high-income residents living centrally. Women place higher weight on crowding as a quality issue; older passengers attach both higher weight and higher satisfaction to low time variability, while young passengers are less satisfied with, and place lower weight on, personnel attitude. Finally, accessibility, when controlled for how densely populated and central the residential area is, has a vertically equitable distribution.

  • Public defence: 2020-01-31 14:00 K1, Stockholm
    Winberg-Wang, Helen
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Chemical Engineering, Chemical Engineering.
    Water density impact on water flow and mass transport in rock fractures2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    One way of taking care of spent nuclear fuel is to place it in a geological repository. In Sweden, a three-barrier system is planned. The system is based on encapsulating the fuel in copper canisters. These are surrounded by bentonite clay and buried under 500 m of bedrock. As a part of the safety assessment, the Q-equivalent model is used to quantify the possible release of radioactive material. This model also describes the rate at which corrosive agents carried by seeping water in rock fractures can reach the canisters, which may affect the longevity of the canisters.

    The aim of this thesis was originally to develop an experimental, physical model to visualize and validate the Q-equivalent model. However, the overarching theme of this work has been to study the effect of minor density differences that might be overlooked in experiments, both concentration-dependent differences and density differences induced by light absorption.

    In the initial diffusion and flow experiment and the associated calculations and simulations, it was found that the simple Q-equivalent model can describe and quantify the mass transport in both parallel and variable-aperture fractures. However, this is the case only if the density difference between seeping water and clay pore water is insignificant. It was found in experiments with dyes used to visualise the flow and diffusion patterns that even minimal density differences could significantly alter the flow pattern. Density differences can result from concentration gradients or be induced by light absorption. The Q-equivalent model was extended to account for density-induced flow. The importance of density-induced flow due to concentration gradients in the setting of a long-term repository for nuclear waste was evaluated. It was found that concentration gradients are able to induce rapid vertical upward or downward flow. This could increase the overall mass transport of radioactive material up to the biosphere or carry it downward to larger depths.

  • Public defence: 2020-01-31 14:00 F3, Stockholm
    Jawerth, Marcus
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Fibre- and Polymer Technology, Coating Technology. KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Centres, Wallenberg Wood Science Center. KTH Royal Institute of Technology.
    Thermoset resins using technical lignin as a base constituent2019Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The need to find sustainable paths for our society is imminent if we are to tackle the environmental concerns of today. The majority of all plastic materials are produced from crude oil, but in the future a much larger portion must originate from renewable resources to address some of these problems. Aromatic molecules are often used when producing rigid and thermally stable polymeric materials, but there are few natural sources for them. One, however, is the wood component lignin, which is produced on a large scale in chemical pulping processes of biomass. Lignin's aromatic structures could be an alternative to non-renewable aromatics in e.g. thermoset applications.

    The heterogeneity of lignin does, however, present some problems in terms of e.g. dispersity, solubility, diverse functionality, and varying polymer backbone structure. To tackle these challenges, work-up of lignin and thorough characterization are important in order to produce materials with predetermined, predictable properties. Technical lignins have functional groups, e.g. phenols, aliphatic hydroxyls, and carboxylic acids, that can be utilized as chemical handles for the further modifications required for different material systems.

    This thesis focuses on how to utilize solvent-fractionated, relatively well-characterized LignoBoost Kraft lignin to produce thermoset resins by chemical modification and a crosslinking procedure. An efficient procedure to selectively allylate the phenolics, the most abundant functionality of the lignin fractions, has been developed and evaluated, as well as a curing procedure using a thiol crosslinker and a thiol-ene reaction. The produced materials were analysed with regard to material properties, density, and morphology. The resins based on the selectively allylated lignin fractions were furthermore evaluated as a potential matrix for carbon fibre composites. It was shown that the material samples could be processed by pre-impregnating carbon fibres to form composite materials. The molecules of the lignin fraction were also used as core substrates in a ring-opening polymerization to produce functional star co-polymers. The procedure was evaluated, and it could be shown that the lignin backbone underwent substantial structural changes of its inter-unit linkages.

    Lignin, being one of the few large resources of naturally occurring aromatics, has great potential to be used for material applications where rigidity and thermal stability are important. This thesis attempts to add a few pieces towards such a goal.

    The full text will be freely available from 2020-06-01 09:38