  • Public defence: 2016-09-29 09:00 F 3, Stockholm
    Novotny, Michael
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.).
    Breaking the chains: A technological and industrial transformation beyond papermaking: Technology management of incumbents (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In recent years, the necessity and opportunity of transforming pulp and paper mills into integrative units for large-scale output of biochemicals, biomaterials, and biofuels have come up in discussions of industrial renewal in the Northern hemisphere (mainly in Canada, Sweden and Finland). This transformation involves technology shifts as well as changing business models based on new bioproducts under profoundly new market conditions. The aim of this dissertation is to analyse how wood-based biomass industries – with an emphasis on incumbent pulp and paper industries (PPIs) – are managing this industrial and technological transformation that is taking place beyond the papermaking paradigm. Innovation theories on mature industries, their incumbents, and their propensity for technological lock-in and inertia are well known. How new entrants and incumbents manage these large shifts is seen as central to understanding the dynamics of new, large-scale sustainable technologies on the one hand and the renewal of large, mature process industries on the other. Three research questions are addressed. First, where are the knowledge and technology frontiers developing in this transformation? Second, how are incumbents of PPIs managing large market and technology shifts based on existing capabilities and knowledge bases? Third, what are the key mechanisms behind the transformation of PPIs from a process-industry perspective? The hermeneutical insights into the system of biomass technologies in general and the PPI industries in particular were gained by using a qualitative case-study approach, which formed the basis for four research articles and for outlining the empirical context and keyword searches of the quantitative bibliometric methods in a fifth research article. The research findings and main contributions concern the identification of the analytical, “formal”, science-based technology frontiers from a knowledge-base perspective. Old industrialised forest/PPI nations tended to specialize in rather slow-growing, forest-based frontiers. With the exception of North America, they seem to have stayed close to the research trajectories of their woody raw material and knowledge base. However, this is not the entire explanation of transformation and technology development. Chemical pulp mills, in several cases developed into biorefineries, are the nexus of the emerging development block. They contribute products to a bioeconomy that is actively moving away from fossil resources and polluting materials (such as cement, cotton and plastics). In addition, demo plants (potentially nurturing hundreds of bioproducts) that are present at mill sites and involve different stakeholders can act as the interface between analytical and synthetic knowledge bases that otherwise are difficult to combine in the upscaling phases of process industries. The response of PPI organizations to shifts in both technology and business models is also explained by the concept of diverging innovations of non-assembled products. These are part of a diversification of an industry from a forest-industry perspective, and also of a diversification that may enter trajectories of several by-products and side-streams of the pulp “biorefinery” mill, with analogies to a product tree and to the material transformation flow of its production systems. But it is also a phenomenon of synergies in a broader multi-sectorial perspective, i.e. new sets of related products/processes that are able to replace industries of non-assembled products under the above-mentioned new market conditions. The phenomenon of diverging innovations can be regarded both as an empirical contribution – the breaking up of a closed, integrated process industry into something new, with several emerging and integrative industries, as a response to the large shifts in papermaking and sustainability needs in society – and as a theoretical remark on the model for non-assembled products presented by Utterback (1994).

  • Public defence: 2016-09-29 09:00 Kollegiesalen, Stockholm
    Westman, Jonas
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    Specifying Safety-Critical Heterogeneous Systems Using Contracts Theory (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Requirements engineering (RE) is a well-established practice that is also emphasized in safety standards such as IEC 61508 and ISO 26262. Safety standards advocate a particularly stringent RE where requirements must be structured in a hierarchical manner in accordance with the system architecture; at each level, requirements must be allocated to heterogeneous (SW, HW, mechanical, electrical, etc.) architecture elements and trace links must be established between requirements. In contrast to the stringent RE in safety standards, according to previous studies, RE in industry is in general of poor quality. Considering a typical RE tool, other than basic impact analysis, the tool neither gives feedback nor guides a user when specifying, allocating, and structuring requirements. In practice, for industry to comply with the stringent RE in safety standards, better support for RE is needed, not only from tools, but also from principles and methods.

    Therefore, a foundation is presented consisting of an underlying theory for specifying heterogeneous systems and complementary principles and methods to specifically support the stringent RE in safety standards. This foundation is indeed suitable as a base for implementing guidance- and feedback-driven tool support for such stringent RE; however, the proposed theory, principles, and methods provide essential support regardless of whether tools are used or not.

    The underlying theory is a formal compositional contracts theory for heterogeneous systems. This contracts theory embodies the essential RE property of separating requirements on a system from assumptions on its environment. Moreover, the contracts theory formalizes the stringent RE effort of structuring requirements hierarchically with respect to the system architecture. The proposed principles and methods for supporting the stringent RE in safety standards are thus well-rooted in formal concepts and conditions, and are therefore theoretically sound. In addition, the foundation is tailored to be enforced by both existing and new tools, since the support is based on precise mathematical expressions that can be interpreted unambiguously by machines. Enforcing the foundation in a tool entails support that guides and gives feedback when specifying heterogeneous systems in general, and safety-critical ones in particular.
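
    As a rough, hypothetical illustration of the assume-guarantee idea behind contracts (a sketch only, not the formalism developed in the thesis), a contract can be seen as a pair of an assumption on the environment and a guarantee on the component, where an implementation satisfies the contract if the guarantee holds whenever the assumption does:

        # Hypothetical, minimal sketch of an assume-guarantee contract; the names and
        # the finite-behaviour check are illustrative assumptions, not the thesis's formalism.
        from dataclasses import dataclass
        from typing import Callable, Dict, Iterable

        Behaviour = Dict[str, float]  # one observed run, e.g. {"temp_in": 90.0, "temp_out": 70.0}

        @dataclass
        class Contract:
            assumption: Callable[[Behaviour], bool]   # what the component expects from its environment
            guarantee: Callable[[Behaviour], bool]    # what it promises whenever the assumption holds

            def satisfied_by(self, behaviours: Iterable[Behaviour]) -> bool:
                # The guarantee must hold for every behaviour where the assumption holds.
                return all(self.guarantee(b) for b in behaviours if self.assumption(b))

        # Example: a cooler shall lower the temperature by 15 degrees whenever the inlet is below 100.
        cooler = Contract(
            assumption=lambda b: b["temp_in"] < 100.0,
            guarantee=lambda b: b["temp_out"] <= b["temp_in"] - 15.0,
        )

        runs = [{"temp_in": 90.0, "temp_out": 70.0}, {"temp_in": 120.0, "temp_out": 119.0}]
        print(cooler.satisfied_by(runs))  # True: the second run falls outside the assumption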

  • Public defence: 2016-09-29 10:00 Sal A, Kista
    Afrasiabi, Roodabeh
    KTH, School of Information and Communication Technology (ICT), Materials- and Nano Physics.
    Silicon Nanoribbon FET Sensors: Fabrication, Surface Modification and Microfluidic Integration (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Over the past decade, the field of medical diagnostics has seen an incredible amount of research towards the integration of one-dimensional nanostructures such as carbon nanotubes, metallic and semiconducting nanowires and nanoribbons for a variety of bio-applications. Among these one-dimensional structures, silicon nanoribbon (SiNR) field-effect transistors (FETs) as electro-chemical nanosensors hold particular promise for label-free, real-time and sensitive detection of biomolecules using affinity-based detection. In SiNR FET sensors, electrical transport is primarily along the nanoribbon axis in a thin sheet (< 30 nm) serving as the channel. High sensitivity is achieved because of the large surface-to-volume ratio, which allows analytes binding anywhere along the NR to affect the entire conductivity through their surface charge. Unfortunately, sensitivity without selectivity is still an ongoing issue, and this thesis aims at addressing the detection challenges and proposing effective developments, such as parallel and multiple detection using individually functionalized SiNRs.

    We present here a comprehensive study of the design, fabrication, operation and device performance parameters for the next generation of SiNR FET sensors towards multiplexed, label-free detection of biomolecules, using an on-chip microfluidic layer based on a highly cross-linked epoxy. We first study the sensitivity of different NR dimensions, followed by an analysis of drift and hysteresis effects. We have also addressed two types of gate oxides (namely SiO2 and Al2O3) which are commonly used in standard CMOS fabrication of ISFETs (ion-sensitive FETs). Not only have we studied and compared the hysteresis and response-time effects in these two types of oxides, but we have also suggested a new integrated on-chip reference nanoribbon/microfluidics combination to monitor the long-term drift in SiNR FET nanosensors. Our results show that, compared to Al2O3, silicon-oxide gated SiNR FET sensors show high hysteresis and slow response, which limit their performance to background electrolytes with low ionic strength. Al2O3, on the other hand, proves more promising as the gate oxide of choice for use in nanosensors. We have also illustrated that the new integrated sensor NR/reference NR can be utilized for real-time monitoring of the above-studied sources of error during pH sensing. Furthermore, we have introduced a new surface silanization method (using 3-aminopropyltriethoxysilane) utilizing microwave-assisted heating, which, compared to conventional heating, yields an amino-terminated monolayer with high surface coverage on the oxide surface of the nanoribbons. A highly uniform and dense monolayer not only reduces the pH sensitivity of the bare silicon-oxide surface in physiological media but also allows for more receptors to be immobilized on the surface. Protocols for surface functionalization and biomolecule immobilization were evaluated using model systems. Selective spotting of receptor molecules can be used to achieve localized functionalization of individual SiNRs, opening up opportunities for multiplexed detection of analytes.

    Additionally, we present a novel approach by integrating droplet-based microfluidics with the SiNR FET sensors. Using the new system we are able to successfully detect trains of droplets with various pH values. The integrated system enables a wide range of label-free biochemical and macromolecule sensing applications based on the detection of biological events, such as enzyme-substrate interactions, within the droplets.

  • Public defence: 2016-09-29 13:00 Seminar room Earth, Solna
    Xu, Hao
    KTH, School of Engineering Sciences (SCI), Applied Physics.
    Fluorescence Properties of Quantum Dots and Their Utilization in Bioimaging (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Quantum dots (QDs), especially colloidal semiconductor QDs, possess properties including high quantum yields, narrow fluorescence spectra, broad absorption and excellent photostability, making them extremely powerful in bioimaging. In this thesis, we studied the fluorescence properties of QDs and explored multiple ways to boost applications of QDs in the bioimaging field.

    By time-correlated single photon counting (TCSPC) measurement, we quantitatively interpreted the fluorescence mechanism of colloidal semiconductor QDs.
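
    The abstract does not detail the analysis; as a generic, hypothetical illustration of how a fluorescence lifetime is commonly extracted from a TCSPC histogram (synthetic data, not the thesis's actual method), a mono-exponential decay can be fitted to the background-subtracted counts:

        # Illustrative sketch only: estimating a fluorescence lifetime from a TCSPC
        # histogram by fitting a mono-exponential decay. The data are synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 100.0, 200)             # time bins in ns
        true_tau = 20.0                              # assumed lifetime in ns
        counts = rng.poisson(1000.0 * np.exp(-t / true_tau) + 5.0)  # decay + background

        background = counts[t > 80.0].mean()         # crude background estimate from the tail
        signal = np.clip(counts - background, 1e-9, None)

        # Log-linear least-squares fit: ln I(t) = ln I0 - t / tau
        slope, intercept = np.polyfit(t, np.log(signal), 1, w=np.sqrt(signal))
        print(f"estimated lifetime: {-1.0 / slope:.1f} ns")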

    To enhance QD fluorescence, we used a porous alumina membrane as a photonic crystal structure to modulate QD fluorescence.

    We studied the acid dissociation of 3-mercaptopropionic acid (MPA) coated QDs mainly through electrophoretic mobility of 3-MPA coated CdSe QDs and successfully demonstrated the impact of pH change and Ca2+ ions.

    Blinking phenomena of both CdSe-CdS/ZnS core-shell QDs and 3C-SiC nanocrystals (NCs) were studied. A general model of blinking characteristics relates the on-state distribution to CdSe QD surface conditions. The energy relaxation pathway of the 3C-SiC NC fluorescence was found to be independent of surface states.

    To examine the QD effect on ciliated cells, we conducted a 70-day-long experiment on the bioelectric and morphological response of human airway epithelial Calu-3 cells with periodic deposition of 3-MPA coated QDs and found the cytotoxicity of the QDs to be very low.

    In brief summary, our study of QDs could benefit bioimaging and biosensing. In particular, super-resolution fluorescence bioimaging techniques, such as stochastic optical reconstruction microscopy (STORM) and photo-activated localization microscopy (PALM), may benefit from the modulation of QD blinking demonstrated in this study, and fluorescence lifetime imaging microscopy (FLIM) could take advantage of the lifetime modulation based on our QD lifetime study.

  • Public defence: 2016-09-29 13:00 F3, Stockholm
    Razola, Mikael
    KTH, School of Engineering Sciences (SCI), Aeronautical and Vehicle Engineering, Naval Systems.
    New Perspectives on Analysis and Design of High-Speed Craft with Respect to Slamming (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    High-speed craft are in high demand in the maritime industry, for example, in maintenance operations for offshore structures, for search and rescue, for patrolling operations, or as leisure craft that deliver speed and excitement. Design and operation of high-speed craft are often governed by the hydrodynamic phenomenon of slamming, which occurs when the craft impacts the wave surface. Slamming loads affect the high-speed craft system (the crew, the structure and various sub-systems) and limit its operation. To meet the ever-increasing demands on safety, economy and reduced environmental impact, there is a need to develop more efficient high-speed craft. This progression is, however, limited by the prevailing semi-empirical design methods for high-speed planing craft structures. These methods provide only a basic description of the involved physics, and their validity has been questioned.

    This thesis contributes to improving the conditions for designing efficient high-speed craft by focusing on two key topics: evaluation and development of the prevailing design methods for high-speed craft structures, and development towards structural design based on first-principles modeling of the slamming process. In particular, a methodological framework that enables detailed studies of the slamming phenomena using numerical simulations and experimental measurements is synthesized and evaluated. The methodological framework involves modeling of the wave environment, the craft hydromechanics and structural mechanics, and statistical characterization of the response processes. The framework forms the foundation for an extensive evaluation and development of the prevailing semi-empirical design methods for high-speed planing craft. Through the work presented in this thesis, the framework is also shown to be a viable approach for the introduction of simulation-based design methods based on first-principles modeling of the involved physics. In summary, the presented methods and results provide important stepping stones towards designing more efficient high-speed planing craft.

  • Public defence: 2016-09-30 09:00 Kollegiesalen, Stockholm
    Liu, Xuejin
    KTH, School of Technology and Health (STH), Medical Engineering, Medical Imaging.
    Characterization and Energy Calibration of a Silicon-Strip Detector for Photon-Counting Spectral Computed Tomography (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Multibin photon-counting x-ray detectors are promising candidates to be applied in next generation computed tomography (CT), whereby energy information from a broad x-ray spectrum can be extracted and properly used for improving image quality and correspondingly reducing radiation dose. A silicon-strip detector has been developed for spectral CT, which operates in photon-counting mode and allows pulse-height discrimination with 8 adjustable energy bins.

    Critical characteristics of the detector, namely energy resolution and count-rate performance, are evaluated. An absolute energy resolution (ΔE) from 1.5 keV to 1.9 keV with increasing x-ray energy from 40 keV to 120 keV is found. Pulse pileup degrades the energy resolution by 0.4 keV when increasing the input count rate from zero to 100 Mcps mm⁻², while charge sharing shows a negligible effect. A near-linear relationship between the input and output count rates is observed up to 90 Mcps mm⁻² in a clinical CT environment. In addition, no saturation effect appears for the maximally achieved photon flux of 485 Mphotons s⁻¹ mm⁻², with a count-rate loss of 30%.

    The detector is energy calibrated in terms of gain and offset with the aid of monoenergetic x rays. The gain variation among channels is below 4%, whereas the variation of the offsets is on the order of 1 keV. In order to perform the energy calibration in a routinely applicable way, a method that makes use of a broad x-ray spectrum instead of monoenergetic x rays is proposed. It is based on a regression analysis that adjusts a modelled spectrum of deposited energies to a measured pulse-height spectrum. This method shows high potential for application in an existing CT scanner, with an uncertainty of the calibrated thresholds between 0.1 and 0.2 keV.
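
    As a rough, hypothetical illustration of the simpler gain/offset part of such a calibration (made-up numbers; the spectrum-fitting method proposed in the thesis is more involved), a linear relation between threshold setting and photon energy can be fitted from known monoenergetic reference points:

        # Hypothetical sketch of a gain/offset energy calibration for one detector
        # channel: fit E = gain * threshold + offset to known monoenergetic references.
        import numpy as np

        # (threshold setting in arbitrary DAC units, known photon energy in keV)
        references = np.array([[55.0, 40.0],
                               [83.0, 60.0],
                               [125.0, 90.0]])

        A = np.column_stack([references[:, 0], np.ones(len(references))])
        (gain, offset), *_ = np.linalg.lstsq(A, references[:, 1], rcond=None)

        print(f"gain = {gain:.3f} keV/DAC, offset = {offset:.2f} keV")
        print(f"threshold 100 DAC corresponds to {gain * 100 + offset:.1f} keV")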

    The energy-calibration method is further used in the development of a spectral response model of the detector. This model is used to accurately predict the bin-wise response of each detector channel, and it is validated by two application examples. First, the model is used in combination with the inhomogeneity compensation method to eliminate ring artefacts in CT images. Second, the spectral response model is used as the basis of the maximum-likelihood approach for projection-based material decomposition. The reconstructed basis images show a good separation between the calcium-like material and the contrast agents, iodine and gadolinium. Additionally, the contrast-agent concentrations are reconstructed with more than 94% accuracy.

  • Public defence: 2016-09-30 10:00 L1, KTH, Stockholm
    West, Jens
    KTH, School of Architecture and the Built Environment (ABE), Transport Science, Transport Planning, Economics and Engineering. Sweco, Sweden.
    Modelling and Appraisal in Congested Transport Networks (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Appraisal methodologies for congestion mitigation projects are less well developed than methodologies for projects reducing free-flow travel times. For instance, static assignment models are incapable of representing the build-up and dissipation of traffic queues, or of capturing the experienced crowding caused by uneven on-board passenger loads. Despite the availability of dynamic traffic assignment, only a few model systems have been developed for cost-benefit analysis of real applications. The six included papers present approaches and tools for analysing traffic and transit projects where congestion relief is the main target.

    In the transit case studies, we use an agent-based simulation model to analyse congestion and crowding effects and to conduct cost-benefit analyses. In the case study of a metro extension in Stockholm, we demonstrate that congestion and crowding effects constitute more than a third of the total benefits and that a conventional static model vastly underestimates these effects. In another case study, we analyse various operational measures and find that the three main measures (boarding through all doors, headway-based holding and bus lanes) had an overall positive impact on service performance and that synergistic effects exist.

    For the congestion charging system in Gothenburg, we demonstrate that a hierarchical route choice model with a continuous value-of-time distribution gives realistic predictions of route choice effects although the assignment is static. We use the model to show that the net social benefit of the charging system in Gothenburg is positive, but that low-income groups pay a larger share of their income than high-income groups. To analyse congestion charges in Stockholm, however, integration of dynamic traffic assignment with the demand model is necessary, and we demonstrate that this is fully possible.
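
    As a toy, hypothetical illustration of how a continuous value-of-time (VoT) distribution determines route-choice shares under a toll (not the model estimated in the thesis; the lognormal parameters below are assumptions), the share of drivers choosing a tolled but faster route is the probability that their VoT times the time saving exceeds the toll:

        # Toy illustration (not the thesis's model): with a continuous value-of-time
        # (VoT) distribution, the share of drivers choosing a tolled but faster route
        # is the probability that VoT * time_saving exceeds the toll.
        import math

        def share_choosing_tolled_route(toll_sek, time_saving_h, mu, sigma):
            """VoT assumed lognormal with parameters mu, sigma (of ln VoT, in SEK/h)."""
            if time_saving_h <= 0:
                return 0.0
            vot_threshold = toll_sek / time_saving_h          # indifferent VoT
            z = (math.log(vot_threshold) - mu) / sigma
            return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2)))  # P(VoT > threshold)

        # e.g. 20 SEK toll, 10 minutes saved, median VoT assumed ~80 SEK/h
        print(share_choosing_tolled_route(20.0, 10 / 60, mu=math.log(80.0), sigma=0.8))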

    Models able to correctly predict these effects highlight the surprisingly large travel time savings of pricing policies and small operational measures. These measures are cheap compared to investments in new infrastructure and their implementation can therefore lead to large societal gains.

  • Public defence: 2016-09-30 10:00 M312, Stockholm
    Liu, Hailong
    KTH, School of Industrial Engineering and Management (ITM), Materials Science and Engineering, Applied Process Metallurgy.
    A Study of the Particle Transport Behavior in Enclosed Environments (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The main purpose of the present work is to increase the fundamental understanding of the particle transport behavior in an enclosed environment and to provide knowledge for estimating and measuring the particle emission from pellets during a steel production process.

    A laboratory study focused on the effect of a high sliding velocity on the particle generation from dry sliding wheel-rail contacts has been conducted. The particle concentration and the size distribution were acquired online by using particle number counters during the tests. After the completion of each test, the characteristics of the pin worn surfaces and the collected particles were analyzed with the aid of SEM (scanning electron microscopy) combined with EDS (energy-dispersive X-ray analysis). The results show that the amount of particle generation increases significantly as the sliding velocity increases from 0.1 to 3.4 m/s. Moreover, the particle size distribution results indicate that the majority of the generated particles are submicron (ultrafine and fine) particles in the case of a high sliding velocity (1.2 and 3.4 m/s). The observations of iron oxide layers within the pin worn surface and the collected iron-oxide-containing particles reveal that this substantial amount of small particles can be attributed to oxidative wear between the dry sliding wheel-rail contacts under high sliding velocities.

    The effect of the particle transport behavior of submicron particles in the test chamber on the measurements taken at the outlet was studied with a three-dimensional mathematical model. With the assistance of CFD (computational fluid dynamics) simulations, the airflow pattern was found to have a major effect on the particle transport during the tests. An estimate of the particle loss rate showed that 30% of the generated particles failed to be captured at the outlet. The reason for this could be temporary suspension and deposition onto the surfaces. It should be noted that the particles were assumed to follow the air stream as a result of the small particle size. In addition, the Lagrangian tracking results reveal that the limiting size for particles to become airborne during the tests is around 10 µm. However, the computational cost is found to be significantly high when the Lagrangian method is adopted.

    To consider the measurements of micron particles and to reduce the computational time, a coupled drift-flux and Eulerian deposition model was developed. In this model, the effects of gravitational sedimentation and deposition on the particle dispersion were included. The simulation results are in good agreement with the available experimental data; the value of the APD (average percentage deviation) is in the range of 7.7% to 21.2%. Therefore, a set of simulation cases has been carried out to investigate the influential factors (particle size, wall roughness, source location and duration). The results show that the homogeneity of the particle concentration distribution in the model room declines with an increased particle size (0.01 to 10 µm). An almost uniform particle concentration field is formed for submicron particles (0.01 and 0.1 µm) and for fine particles (1 and 2 µm). However, a clear concentration gradient is obtained for coarse particles (4, 6, 8 and 10 µm). This is because gravitational settling dominates the motion of coarse particles. As a result, a large deposited amount and a high deposition fraction were predicted for coarse particles. Moreover, the surface roughness was found to enhance the deposition of submicron particles (0.1 and 0.01 µm) for a given friction velocity. In contrast, the deposition of micron particles is much less sensitive to the variation of the surface roughness. For the case of an internal source in the room, where a release over a long duration is considered, the particle dispersion strongly depends on the release location. However, this is not the case for a short release time.
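
    For reference, the average percentage deviation used above can be taken as the mean absolute relative deviation between simulated and measured concentrations; the short sketch below uses made-up numbers purely to illustrate the metric (the thesis reports APD values of 7.7% to 21.2%):

        # Illustrative sketch: average percentage deviation (APD) between simulated
        # and measured particle concentrations, taken here as the mean absolute
        # relative deviation. The numbers are hypothetical.
        def average_percentage_deviation(simulated, measured):
            return 100.0 * sum(abs(s - m) / m for s, m in zip(simulated, measured)) / len(measured)

        simulated = [1.05e4, 2.3e4, 4.4e4, 8.2e4]   # particles per m^3 (hypothetical)
        measured  = [1.00e4, 2.5e4, 4.0e4, 9.0e4]

        print(f"APD = {average_percentage_deviation(simulated, measured):.1f} %")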

    The dispersions and depositions of micron particles were explored in a laboratory test focused on the particle emission from the wear between pellets. The simulation results were compared to the measured data with respect to the particle flux at the outlet, and a good agreement (4.92% < APD < 12.02%) is obtained. In addition, the influence of the air flow rate at the inlet and of the particle size on the sampling results at the outlet was investigated carefully. The results show that a stronger air supply at the inlet can push more particles to the outlet for any given particle size. However, the resulting increase of the measurable fraction is more significant for 4, 6, 8 and 10 µm particles than for 1, 2 and 20 µm particles. Moreover, it is apparent that 20 µm particles cannot be measured in such a measurement system.

    The full text will be freely available from 2016-11-30 10:00
  • Public defence: 2016-09-30 10:00 M2, Stockholm
    Kanje, Sara
    KTH, School of Biotechnology (BIO), Protein Technology.
    Engineering of small IgG binding domains for antibody labelling and purification (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In protein engineering, rational design and selection from combinatorial libraries are methods used to develop proteins with new or improved features. A very important protein for the biological sciences is the antibody, which is used as a detecting agent in numerous laboratory assays. Antibodies used for these purposes are often “man-made”, by immunising animals with the desired target, or by selections from combinatorial libraries. In nature, antibodies are part of the immune defence protecting us from foreign attacks from e.g. bacteria or viruses. Some bacteria have evolved surface proteins that can bind to proteins abundant in the blood, like antibodies and serum albumin. By doing so, the bacteria can cover themselves in the host’s own proteins and thereby evade detection by the immune system. Two such proteins are Protein A from Staphylococcus aureus and Protein G from group C and G Streptococci. Both these proteins contain domains that bind to antibodies, one of which is denoted C2 (from Protein G) and another B (from Protein A). The B domain has been further engineered into the Z domain.

    In this thesis, protein engineering has been used to develop variants of the C2 and Z domains for site-specific labelling of antibodies and for antibody purification with mild elution. By taking advantage of the domains’ inherent affinity for antibodies, engineering and design of certain amino acids or protein motifs of the domains have resulted in proteins with new properties. A photo-crosslinking amino acid, p-benzoylphenylalanine, has been introduced at different positions in the C2 domain, rendering three new protein domains that can be used for site-specific labelling of antibodies at the Fc or Fab fragment. These domains were used to label antibodies with lanthanides for detection in a multiplex immunoassay. Moreover, a library of calcium-binding loops was grafted onto the Z domain and used for selection of a domain that binds antibodies in a calcium-dependent manner. This engineered protein domain can be used for the purification of antibodies using milder elution conditions, by calcium removal, as compared to traditional antibody purification.

  • Public defence: 2016-09-30 10:00 Sal B, Kista
    Noroozi, Mohammad
    KTH, School of Information and Communication Technology (ICT), Materials- and Nano Physics, Functional Materials, FNM.
    Growth, processing and characterization of group IV materials for thermoelectric applications (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The discovery of new energy sources and solutions is one of the important global issues of today, with a large impact on the economy as well as the environment. One way to help mitigate this issue is to recover waste heat, which is produced in large quantities by industry, through vehicle exhausts and in many other situations where we consume energy. One way to do this would be to use thermoelectric (TE) materials, which enable direct interconversion between heat and electrical energy. This thesis investigates how novel material combinations and nanotechnology could be used for fabricating more efficient TE materials and devices.

    The work presents synthesis, processing, and electrical characterization of group IV materials for TE applications. The starting point is epitaxial growth of alloys of the group IV elements silicon (Si), germanium (Ge) and tin (Sn), with a focus on SiGe and GeSn(Si) alloys. The material development is performed using the chemical vapor deposition (CVD) technique. Strained and strain-relaxed Ge1-xSnx (0.01 ≤ x ≤ 0.15) has been successfully grown on Ge buffers and Si substrates, respectively. It is demonstrated that a precise control of temperature, growth rate, Sn flow and buffer layer quality is necessary to overcome Sn segregation and achieve a high-quality GeSn layer. The incorporation of Si and of n- and p-type dopant atoms is also investigated, and it was found that the strain can be compensated in the presence of Si and dopant atoms.

    Si1-xGex layers are grown on Si-on-insulator wafers and condensed by oxidation at 1050 °C to manufacture SiGe-on-insulator (SGOI) wafers. Nanowires (NWs) are processed, either by sidewall transfer lithography (STL) or by using conventional lithography, and subsequently brought to nanoscale dimensions by the focused ion beam (FIB) technique. The NWs are formed in an array, where one side is heated by a resistive heater made of Ti/Pt. The power factor of the NWs is measured and the results are compared for NWs manufactured by the different methods. It is found that the electrical properties of NWs fabricated with the FIB technique can be influenced by Ga doping during ion milling.
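
    For orientation, the thermoelectric power factor referred to here is commonly taken as PF = S²σ, the square of the Seebeck coefficient times the electrical conductivity; the sketch below uses hypothetical values of a plausible order of magnitude, not measurements from the thesis:

        # Minimal sketch: thermoelectric power factor PF = S^2 * sigma, computed from
        # a Seebeck coefficient S and an electrical conductivity sigma.
        # The example values are hypothetical, not measurements from the thesis.
        def power_factor(seebeck_V_per_K, conductivity_S_per_m):
            return seebeck_V_per_K ** 2 * conductivity_S_per_m   # W m^-1 K^-2

        S = 150e-6        # Seebeck coefficient, V/K (typical order for doped SiGe)
        sigma = 5.0e4     # electrical conductivity, S/m

        pf = power_factor(S, sigma)
        print(f"power factor = {pf * 1e3:.2f} mW m^-1 K^-2")  # ~1.1 mW/(m K^2)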

    Finally, the carrier transport in SiGe NWs formed on SGOI samples is tailored by applying a back-gate voltage on the Si substrate. In this way, the power factor is improved by a factor of 4. This improvement is related to the presence of defects and/or small fluctuations of the nanowire shape and Ge content along the NWs, generated during processing and condensation of the SiGe layers. The SiGe results open a new window for operation of SiGe NW-based TE devices in the new temperature range of 250 to 450 K.

  • Public defence: 2016-09-30 10:00 F3, Stockholm
    Liu, Dongming
    KTH, School of Chemical Science and Engineering (CHE), Fibre and Polymer Technology, Polymeric Materials.
    Polyethylene – metal oxide particle nanocomposites for future HVDC cable insulation: From interface tailoring to designed performance (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Low-density polyethylene (LDPE) nanocomposites containing metal oxide nanoparticles are considered promising candidates for insulating materials in future high-voltage direct-current (HVDC) cables. The significant improvement in dielectric properties compared with the unfilled polymer is attributed to the large and active interface between the nanoparticles and the polymer. The nanoparticles may, however, also initiate cavitation under stress and pose a risk of accelerated ageing due to the adsorption and inactivation of the protecting antioxidants. This study is focused on the possibilities of achieving a well-balanced performance of polyethylene nanocomposites for HVDC insulation by tailoring the particle interface in the nanocomposites.

    A facile and versatile surface coating method for metal oxide particles was developed based on silane chemistry. The developed method was successfully applied to 8.5 nm Fe3O4, 25 nm ZnO and 50 nm Al2O3 particles, with the aim of developing uniform coatings that could universally be applied to individual particles rather than to aggregates of particles. The surface properties of the coatings were further tailored by applying silanes with terminal alkyl groups of different lengths, including methyl (C1-), octyl (C8-) and octadecyl (C18-) units. Transmission electron microscopy, infrared spectroscopy and thermal gravimetric analysis confirmed the presence of uniform coatings on the particle surfaces and, importantly, the coatings were found to be highly porous.

    The capacity of metal oxide particles to adsorb relevant polar species (e.g. moisture, acetophenone, cumyl alcohol and phenolic antioxidant) was further assessed due to its potential impact on electrical conductivity and long-term stability of the nanocomposites. The oxidative stability of the nanocomposites was affected by the adsorption of phenolic antioxidants on particles and transfer of catalytic impurities (ionic species) from metal oxide particles to polymer matrix. It was found that carefully coated metal oxide particles had much less tendency to adsorb antioxidants. They could, however, adsorb moisture, acetophenone and cumyl alcohol. The coated particles did not emit any destabilizing ionic species into the polymer matrix. 

    The inter-particle distance of the nanocomposites based on C8-coated nanoparticles showed only a small deviation from the ideal, theoretical value, indicating a good particle dispersion in the polymer. Scanning electron microscopy of strained nanocomposite samples suggested that cavitation mainly occurred at the polymer/nanoparticle interface. The microstructural changes at the polymer/nanoparticle interface were studied by small-angle X-ray scattering coupled with tensile testing. The polymer/nanoparticle interface was fractal before deformation due to the existence of bound polymer at the nanoparticle surface. Extensive de-bonding of particles and cavitation were observed when the nanocomposites were stretched beyond a critical strain. It was found that the composites based on carefully coated particles showed a higher strain at cavitation than the composites based on uncoated particles. The composites based on C8-coated nanoparticles showed the largest decrease in electrical conductivity and the lowest temperature coefficient of the electrical conductivity among the composite samples studied.

  • Public defence: 2016-09-30 13:00 Sal C, Kista
    Le, Quang Tuan
    KTH, School of Information and Communication Technology (ICT), Materials- and Nano Physics, Material Physics, MF.
    Magnetodynamics in Spin Valves and Magnetic Tunnel Junctions with Perpendicular and Tilted Anisotropies (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Spin-torque transfer (STT) effects have brought spintronics ever closer to practical electronic applications, such as MRAM and active broadband microwave spin-torque oscillator (STO), and have emerged as an increasingly attractive field of research in spin dynamics. Utilizing materials with perpendicular magnetic anisotropy (PMA) in such applications offers several great advantages such as low-current, low-field operation combined with high thermal stability. The exchange coupling that a PMA thin film exerts on an adjacent in-plane magnetic anisotropy (IMA) layer can tilt the IMA magnetization direction out of plane, thus creating a stack with an effective tilted magnetic anisotropy. The tilt angle can be engineered via both intrinsic material parameters, such as the PMA and the saturation magnetization, and extrinsic parameters, such as the layer thicknesses.

    STOs can be fabricated in a number of forms, for example as a nanocontact opened on a mesa from a deposited pseudospin-valve (PSV) structure, or as a nanopillar etched from a magnetic tunnel junction (MTJ), composed of highly reproducible PMA layers or layers with a predetermined tilted magnetic anisotropy.

    All-perpendicular CoFeB MTJ STOs showed high-frequency microwave generation with extremely high current tunability, all achieved at low applied biases. Spin-torque ferromagnetic resonance (ST-FMR) measurements and analysis revealed the bias dependence of the spin-torque components, thus promising great potential for directly gate-voltage-controlled STOs.

    In all-perpendicular PSV STOs, magnetic droplets were observed underneath the nanocontact area at a low drive current and low applied field. Furthermore, preliminary results for microwave auto-oscillation and droplet solitons were obtained from tilted-polarizer PSV STOs. These are promising and would be worth investigating in further studies of STT-driven spin dynamics.

  • Public defence: 2016-10-03 10:00 Sal C, Kista
    Liu, Ying
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Towards Elastic High-Performance Geo-Distributed Storage in the Cloud (2016). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    In this thesis, we have presented techniques and algorithms to reduce request latency of distributed storage services that are deployed geographically. In addition, we have proposed and designed elasticity controllers to maintain predictable performance of distributed storage systems under dynamic workloads and platform uncertainties.

    Firstly, we have proposed a lease-based data consistency algorithm that allows a distributed storage system to serve read-dominant workloads efficiently at a global scale. The leasing algorithm allows replicas with valid leases to serve read requests locally. As a result, most of the read requests are served with little latency. Then, we have investigated the efficiency of quorum-based data consistency algorithms when deployed globally. We have proposed the MeteorShower framework, which is based on replicated logs and loosely synchronized clocks, to augment quorum-based data consistency algorithms. As a result, the quorum-based data consistency algorithms no longer need to query for updates from remote replicas, which significantly reduces request latency. Based on similar insights, we have built a transaction framework, Catenae, for geo-distributed data stores. It employs replicated logs to distribute transactions and aggregate the execution results. This allows Catenae to commit a serializable read-write transaction while experiencing only a single inter-DC RTT delay in most cases.
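
    As a rough, hypothetical sketch of the lease idea (not the algorithm proposed in the thesis), a replica serves reads from its local store only while its lease is valid and otherwise falls back to a remote, strongly consistent read:

        # Rough sketch (an assumption, not the thesis's actual algorithm): a replica may
        # serve reads locally only while it holds a valid lease; otherwise it must fall
        # back to a remote, strongly consistent read.
        import time

        class LeasedReplica:
            def __init__(self, lease_duration_s=5.0):
                self.lease_duration_s = lease_duration_s
                self.lease_expiry = 0.0
                self.local_store = {}

            def renew_lease(self, data_snapshot):
                # In a real system the lease and snapshot would come from the leader.
                self.local_store.update(data_snapshot)
                self.lease_expiry = time.monotonic() + self.lease_duration_s

            def read(self, key, remote_read):
                if time.monotonic() < self.lease_expiry:
                    return self.local_store.get(key)     # low-latency local read
                return remote_read(key)                  # lease expired: pay the WAN round trip

        replica = LeasedReplica()
        replica.renew_lease({"user:1": "alice"})
        print(replica.read("user:1", remote_read=lambda k: f"remote({k})"))  # served locally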

    We examine and control the factors that cause performance degradation when scaling a distributed storage system. First, we have proposed BwMan, which is a model-based network bandwidth manager. It alleviates performance degradation caused by data migration activities. Then, we have systematically modeled the impact of data migrations. Using this model, we have built an elasticity controller, namely ProRenaTa, which combines proactive and reactive controls to achieve better control accuracy. ProRenaTa is able to calculate the best possible scaling plan to resize a distributed storage system under the constraints of achieving scaling deadlines, reducing latency SLO violations and minimizing VM provisioning cost. Consequently, ProRenaTa yields much higher resource utilization and fewer latency SLO violations compared to state-of-the-art approaches. Based on ProRenaTa, we have built an elasticity controller named Hubbub-scale, which adopts a control model that generalizes the data migration overhead to the impact of performance interference caused by multi-tenancy in the Cloud.

  • Public defence: 2016-10-05 10:00 F3, Stockholm
    Zhetibaeva Elvung, Gulzat
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.), Entrepreneurship and innovation.
    Employment in New Firms: Mobility and Labour Market Outcomes (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis studies the role of new firms in the labour market and uses Swedish data to analyze labour mobility in new firms, including transitions of workers both into and out of new firms. In particular, it focuses on employees’ wages in new firms and on labour market outcomes after new-firm employment, such as transitions into long-term employment and entrepreneurship.

    This thesis consists of four essays. The first two essays concern labour mobility into new firms. The last two essays focus on post-new firm employment mobility.

    The first essay explores the role of new firms as an entry point into the labour market for individuals with little (or no) labour market experience. The findings show that the wage penalty found in previous research, which includes more heterogeneous groups of employees, decreases once the focus is solely on labour market entrants. 

    The second essay investigates whether there is a wage penalty for being employed at a new firm when the individual employee’s experience and status in the labour market are taken into account; this essay focuses on individuals who decide to switch jobs. The findings show that there is a wage penalty for being employed at a new firm; however, assuming a random selection into new firms may lead to underestimating the wage differentials.

    The third essay studies the role that new firms play for the career path of their employees. In particular, this paper analyzes whether short-term employment in new firms (employment lasting less than one year) may serve as a stepping stone toward long-term employment (at least two years of employment with the same employer) for non-employed individuals. The findings indicate that short-term employment in new firms may serve as a stepping stone toward long-term employment.

    The fourth paper examines the new firm effect on entrepreneurship, which the findings indicate is positive and statistically significant; this effect remains even after controlling for a worker's ability and shows that employees with both high and low levels of ability may transition to entrepreneurship.

  • Public defence: 2016-10-07 09:00 F3, Stockholm
    Nilsson, Johan O.
    KTH, School of Industrial Engineering and Management (ITM), Materials Science and Engineering.
    First-principles studies of kinetic effects in energy-related materials (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Quantum mechanical calculations based on first-principles (lat. ab initio) methods have over the past decades proved very successful for the study of many materials properties. Based solely on the fundamental constants of physics, the strength of these methods lies not only in describing existing materials, but also in predicting completely new ones. This thesis contains work both related to the quest for improved materials, and to the development of new methods.

    Equilibrium ab initio molecular dynamics methods are powerful for simulating diffusion in solids but are accompanied by high computational costs. This is related to the inherent slowness of the diffusion process in solids. To tackle this problem, we implement the color-diffusion algorithm into the Vienna ab initio simulation package to perform non-equilibrium ab initio molecular dynamics (NEMD) simulations. Ion diffusion in ceria doped with Gd and Sm is studied, and the calculated conductivities are found to agree well with experiment. However, although the NEMD method significantly lowers the computational cost, statistical quality in the calculated conductivity still comes at a high price. Knowing the error resulting from limited statistics is therefore important.

    We derive an analytical expression for the error in the calculated ion conductivity, which is verified numerically using the kinetic Monte Carlo (KMC) method. Developed particularly for the simulation of slow events, the KMC method has the great advantage over the NEMD method of being much less computationally expensive. This allows for long simulation times and large system sizes. The effect of dopant type and dopant distribution on the oxygen ion diffusivity is investigated with KMC simulations of rare-earth doped ceria. The full set of diffusion barriers in the simulation cell is calculated from first principles within a density functional theory (DFT) framework.
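
    For readers unfamiliar with the method, a generic kinetic Monte Carlo step selects one event with probability proportional to its rate and advances time by an exponentially distributed increment; the sketch below is a plain illustration of that textbook scheme with made-up hop rates, not the ceria-specific implementation used in the thesis:

        # Generic kinetic Monte Carlo (KMC) step, shown only as an illustration of the
        # method; the thesis's simulations use barriers computed from DFT for doped ceria.
        import math
        import random

        def kmc_step(rates, rng=random):
            """Pick one event with probability proportional to its rate and return
            (event_index, time_increment)."""
            total = sum(rates)
            r = rng.random() * total
            cumulative = 0.0
            for i, rate in enumerate(rates):
                cumulative += rate
                if r < cumulative:
                    break
            dt = -math.log(1.0 - rng.random()) / total   # exponentially distributed waiting time
            return i, dt

        # Example: three possible oxygen-vacancy hops with hypothetical rates in 1/s
        rates = [1.0e9, 4.0e8, 2.5e9]
        event, dt = kmc_step(rates)
        print(f"executed hop {event}, time advanced by {dt:.2e} s")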

    This Thesis also includes a study of processes involving water on a rutile TiO2(110) surface. The basic processes are: diffusion, dissociation, recombination, and clustering of water molecules. The barriers for these processes are calculated with DFT employing different exchange-correlation (XC) functionals. Using the barriers calculated from two XC functionals, we perform KMC simulations and find that the choice of XC functional radically alters the dynamics of the simulated water-titania system.

  • Public defence: 2016-10-07 10:00 Kollegiesalen, Stockholm
    He, Junjing
    KTH, School of Industrial Engineering and Management (ITM), Materials Science and Engineering.
    High temperature performance of materials for future power plants (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Increasing energy demand leads to two crucial problems for the whole society. One is the economic cost and the other is the pollution of the environment, especially CO2 emissions. Despite efforts to adopt renewable energy sources, fossil fuels will continue to dominate. The temperature and stress are planned to be raised to 700 °C and 35 MPa respectively in the advanced ultra-supercritical (AUSC) power plants to improve the operating efficiency. However, the life of the components is limited by the properties of the materials. The aim of this thesis is to investigate the high temperature properties of materials used for future power plants.

    This thesis contains two parts. The first part is about developing creep rupture models for austenitic stainless steels. Grain boundary sliding (GBS) models have been proposed that can predict experimental results. Creep cavities are assumed to be generated at the intersection of subboundaries with subboundary corners or particles on a sliding grain boundary, the so-called double ledge model. For the first time, a quantitative prediction of cavity nucleation for different types of commercial austenitic stainless steels has been made. For the growth of creep cavities, a new model for the interaction between the shape change of cavities and creep deformation has been proposed. In this constrained growth model, the affected zone around the cavities has been calculated with the help of FEM simulation. The new growth model can reproduce experimental cavity growth behavior quantitatively for different kinds of austenitic stainless steels. Based on the cavity nucleation models and the new growth model, the brittle creep rupture of austenitic stainless steels has been determined. By combining the brittle creep rupture model with the ductile creep rupture models, the creep rupture strength of austenitic stainless steels has been predicted quantitatively. The accuracy of the creep rupture prediction can be improved significantly with the combination of the two models.

    The second part of the thesis is on the fatigue properties of austenitic stainless steels and nickel-based superalloys. Firstly, creep, low cycle fatigue (LCF) and creep-fatigue tests have been conducted for a modified HR3C (25Cr20NiNbN) austenitic stainless steel. The modified HR3C shows good LCF properties, but lower creep and creep-fatigue properties, which may be due to the low ductility of the material. Secondly, the LCF properties of the nickel-based superalloy Haynes 282 have been studied. Tests have been performed on a large ingot. The LCF properties of the core and rim positions did not show evident differences. Better LCF properties were observed when compared with two other nickel-based superalloys with low γ’ volume fraction. Metallography study results demonstrated that the failure mode of the material was transgranular; both the initiation and the growth of the fatigue cracks were transgranular.

  • Public defence: 2016-10-07 10:00 Sal D3, Stockholm
    Hassanpoor, Arman
    KTH, School of Electrical Engineering (EES), Electric power and energy systems. KTH Royal Institute of Technology.
    Modulation of Modular Multilevel Converters for HVDC Transmission (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The outstanding features of modular multilevel converters (MMCs) have recently gained much attention in the high-voltage direct-current (HVDC) transmission field. Power quality, converter cost and system performance are three crucial aspects of HVDC MMCs which are directly linked to the converter modulation and switching schemes. High power quality and performance require a high switching frequency and large cell capacitors, whereas a low switching frequency and small cell capacitors are needed to reduce the converter cost.

    The main objective of this thesis is to propose a practical switching method for HVDC MMCs which balances the aforementioned contradictory requirements. A mathematical analysis of the converter switching pattern, against the power quality and converter cost, has been conducted to formulate an optimization problem for MMCs. Different objective functions are studied for the formulated optimization problem such as converter loss minimization, voltage imbalance minimization and computational burden minimization. This thesis proposes three methods to address different objective functions. Ultimately, a real-time simulator has been built to practically verify and investigate the performance of the proposed methods in a realistic point-to-point HVDC link.

    The most significant outcome of this thesis is the tolerance-band-based switching scheme, which offers direct control of the cell capacitor voltage, low power losses, and robust dynamic performance. As a result, the converter switching frequency can be brought down to values as low as 70 Hz with the proposed cell tolerance band (CTB) method. A modified, optimized CTB method is proposed to minimize the converter switching losses; it could reduce the converter switching losses by 60% in comparison with the conventional phase-shifted carrier modulation method.
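
    As a simplified, hypothetical illustration of the tolerance-band idea (not the actual CTB algorithm of the thesis), cells are considered for re-selection only when their capacitor voltage drifts outside a band around the nominal value, which keeps switching events, and hence the switching frequency, low:

        # Simplified illustration of a tolerance-band idea (not the thesis's actual CTB
        # algorithm): a cell is re-selected for switching only when its capacitor voltage
        # drifts outside a band around the nominal value, keeping switching events rare.
        def cells_needing_action(cell_voltages, v_nominal, band):
            """Return indices of cells whose capacitor voltage left the tolerance band."""
            lower, upper = v_nominal - band, v_nominal + band
            return [i for i, v in enumerate(cell_voltages) if v < lower or v > upper]

        voltages = [1.98, 2.12, 2.01, 1.87, 2.05]   # kV, hypothetical cell capacitor voltages
        print(cells_needing_action(voltages, v_nominal=2.0, band=0.1))  # -> [1, 3]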

    It is concluded that intelligent utilization of the sorting algorithm can enable efficient HVDC station operation by reducing the converter cost.

  • Public defence: 2016-10-07 13:00 F3, Stockholm
    Åhman, Henrik
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Interaction as existential practice: An explorative study of Mark C. Taylor’s philosophical project and its potential consequences for Human-Computer Interaction (2016). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis discusses the potential consequences of applying the philosophy of Mark C. Taylor to the field of Human-Computer Interaction (HCI).

    The first part of the thesis comprises a study focusing on two discursive trends in contemporary HCI, materiality and the self, and how these discourses describe interaction. Through a qualitative, inductive content analysis of 171 HCI research articles, a number of themes are identified in the literature and, it is argued, construct a dominant perspective of materiality, the self, and interaction. Examples that differ from the dominant discourse are also discussed as alternative perspectives for each of the three focal areas.

    The second part of the thesis comprises an analysis of Mark C. Taylor’s philosophical project which enables a number of philosophical positions on materiality, the self, and interaction to be identified. These positions are suggested to be variations and rereadings of themes found in Friedrich Nietzsche’s philosophy. These variations emerge as Taylor approaches Nietzsche through poststructuralism and complexity theory, and it is argued that the apparent heterogeneity of Taylor’s project can be understood as a more coherent position when interpreted in relation to Nietzsche’s philosophy.

    Based on the findings of the two literature studies, the thesis then discusses the possible consequences for HCI, if Taylor’s philosophy were to be applied as a theoretical framework. The thesis argues that Taylor’s philosophy describes the interaction between humans and computers  as an existential process, which contrasts with the dominant HCI discourse; that this view can be related to and provide a theoretical foundation for the alternative discourses in HCI; and that it can contribute to developing HCI.

  • Public defence: 2016-10-10 12:00 Auditorium of Rey Francisco, 4 (Sala de Conferencias), Madrid
    Fitiwi, Desta Zahlay
    KTH, School of Electrical Engineering (EES).
    Strategies, Methods and Tools for Solving Long-term Transmission Expansion Planning in Large-scale Power Systems (2016). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Driven by a number of factors, the electric power industry is expected to undergo a paradigm shift with a considerably increased level of variable energy sources. A significant integration of such sources requires heavy transmission investments over geographically wide and large-scale networks. However, the stochastic nature of such sources, along with the sheer size of network systems, results in problems that may become intractable. Thus, the challenge addressed in this work is to design efficient and reasonably accurate models, strategies and tools that can solve large-scale transmission expansion planning (TEP) problems under uncertainty. A long-term stochastic network planning tool is developed, considering a multi-stage decision framework and a high level of integration of renewables. Such a tool combines the need for short-term decisions with the evaluation of long-term scenarios, which is the practical essence of real-world planning. Furthermore, in order to significantly reduce the combinatorial solution search space, a specific heuristic solution strategy is devised, which works by decomposing the original problem into successive optimization phases. One of the modeling challenges addressed in this work is to select the right network model for power flow and congestion evaluation: complex enough to capture the relevant features but simple enough to be computationally fast. Another relevant contribution is a domain-driven clustering process of snapshots, which is based on a “moments” technique. Finally, the developed models, methods and solution strategies have been tested on standard and real-life systems. This thesis also presents numerical results for an aggregated 1060-node European network system considering multiple RES development scenarios. Generally, the test results show the effectiveness of the proposed TEP model, since, as originally intended, it contributes to a significant reduction in computational effort while fairly maintaining the optimality of the solutions.
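
    As a rough, hypothetical sketch of what a moments-based clustering of operational snapshots might look like (an illustration only; the abstract does not spell out the thesis's technique), each snapshot's hourly profile can be summarized by a few statistical moments and the resulting feature vectors clustered, for example with a plain k-means:

        # Rough illustration (an assumption, not the thesis's method): summarize each
        # operational snapshot by statistical moments of its hourly profile and cluster
        # the snapshots with a simple k-means on those features.
        import numpy as np

        rng = np.random.default_rng(1)
        snapshots = rng.random((50, 24))                 # 50 hypothetical 24-hour demand profiles

        # Moment-based features: mean, standard deviation, skewness of each profile
        mean = snapshots.mean(axis=1)
        std = snapshots.std(axis=1)
        skew = ((snapshots - mean[:, None]) ** 3).mean(axis=1) / np.maximum(std, 1e-12) ** 3
        features = np.column_stack([mean, std, skew])

        def kmeans(x, k=3, iters=20, rng=rng):
            centers = x[rng.choice(len(x), k, replace=False)]
            for _ in range(iters):
                labels = np.argmin(((x[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
                centers = np.array([x[labels == j].mean(axis=0) if np.any(labels == j)
                                    else centers[j] for j in range(k)])
            return labels

        print(np.bincount(kmeans(features), minlength=3))   # snapshots per cluster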

  • Public defence: 2016-10-10 13:00 Sal C, Kista
    Kalavri, Vasiliki
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Kalavri, Vasiliki
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Performance Optimization Techniques and Tools for Distributed Graph Processing2016Doctoral thesis, monograph (Other academic)
    Abstract [en]

    In this thesis, we propose optimization techniques for distributed graph processing. First, we describe a data processing pipeline that leverages an iterative graph algorithm for automatic classification of web trackers. Using this application as a motivating example, we examine how asymmetrical convergence of iterative graph algorithms can be used to reduce the amount of computation and communication in large-scale graph analysis. We propose an optimization framework for fixpoint algorithms and a declarative API for writing fixpoint applications. Our framework uses a cost model to automatically exploit asymmetrical convergence and evaluate execution strategies during runtime. We show that these cost-model-driven optimizations achieve speedups of up to 1.7x and communication savings of up to 54%. Next, we propose to use the concepts of semi-metricity and the metric backbone to reduce the amount of data that needs to be processed in large-scale graph analysis. We provide a distributed algorithm for computing the metric backbone using the vertex-centric programming model. Using the backbone, we can reduce graph sizes by up to 88% and achieve speedups of up to 6.7x.
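
    As a rough, single-machine illustration of the metric backbone concept (not the distributed vertex-centric algorithm developed in the thesis), the sketch below marks an edge as semi-metric when some indirect path is shorter than the direct edge and keeps only the remaining, metric edges; the toy graph is invented.

    ```python
    import heapq
    from collections import defaultdict

    def dijkstra(adj, src):
        """Standard Dijkstra over a dict-of-dicts weighted graph."""
        dist = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in adj[u].items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    def metric_backbone(edges):
        """Keep edge (u, v, w) only if no indirect path is shorter than w."""
        adj = defaultdict(dict)
        for u, v, w in edges:
            adj[u][v] = w
            adj[v][u] = w
        backbone = []
        for u, v, w in edges:
            dist = dijkstra(adj, u)
            if dist.get(v, float("inf")) >= w:   # the direct edge is a shortest path
                backbone.append((u, v, w))
        return backbone

    # Toy example: the edge (a, c) is semi-metric because a-b-c is shorter.
    edges = [("a", "b", 1.0), ("b", "c", 1.0), ("a", "c", 5.0)]
    print(metric_backbone(edges))   # [('a', 'b', 1.0), ('b', 'c', 1.0)]
    ```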

  • Public defence: 2016-10-10 14:00 F3, Stockholm
    Schwarz, Oliver
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Schwarz, Oliver
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    No Hypervisor Is an Island: System-wide Isolation Guarantees for Low Level Code2016Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The times when malware was mostly written by curious teenagers are long gone. Nowadays, threats come from criminals, competitors, and government agencies. Some of them are very skilled and very targeted in their attacks. At the same time, our devices – for instance mobile phones and TVs – have become more complex, connected, and open for the execution of third-party software. Operating systems should separate untrusted software from confidential data and critical services. But their vulnerabilities often allow malware to break the separation and isolation they are designed to provide. To strengthen protection of select assets, security research has started to create complementary machinery such as security hypervisors and separation kernels, whose sole task is separation and isolation. The reduced size of these solutions allows for thorough inspection, both manual and automated. In some cases, formal methods are applied to create mathematical proofs of the security of these systems.

    The actual isolation solutions themselves are carefully analyzed and the included software is often even verified at the binary level. The role of other software and hardware for the overall system security has received less attention so far. This thesis sheds light on these aspects, mainly on (i) unprivileged third-party code and its ability to influence security, (ii) peripheral devices with direct access to memory, and (iii) boot code and how we can selectively enable and disable isolation services without compromising security.

    The papers included in this thesis are both design- and verification-oriented, with an emphasis on the analysis of instruction set architectures. With the help of a theorem prover, we implemented various types of machinery for the automated information flow analysis of several processor architectures. The analysis is guaranteed to be both sound and accurate.
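
    The analyses in the thesis are carried out in a theorem prover over real instruction set models; purely to illustrate the underlying information-flow idea, the toy sketch below runs an invented mini register machine from two states that differ only in a secret register and checks, by random testing rather than proof, that the publicly observable registers end up identical.

    ```python
    import random

    def run(program, regs):
        """Execute a tiny straight-line 'add'-only program over a register file."""
        regs = dict(regs)
        for op, dst, a, b in program:
            if op == "add":
                regs[dst] = (regs[a] + regs[b]) % 256
        return regs

    def noninterferent(program, public, trials=100, seed=0):
        """Random-testing stand-in for a proved information-flow property:
        varying only the 'secret' register must never change public registers."""
        rng = random.Random(seed)
        for _ in range(trials):
            base = {r: rng.randint(0, 255) for r in ("r0", "r1", "r2", "secret")}
            other = dict(base, secret=rng.randint(0, 255))   # vary only the secret
            out1, out2 = run(program, base), run(program, other)
            if any(out1[r] != out2[r] for r in public):
                return False
        return True

    ok_prog = [("add", "r2", "r0", "r1")]         # result independent of the secret
    leaky_prog = [("add", "r2", "r0", "secret")]  # secret data flows into r2
    print(noninterferent(ok_prog, public=("r0", "r1", "r2")))     # True
    print(noninterferent(leaky_prog, public=("r0", "r1", "r2")))  # False (with overwhelming probability)
    ```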

  • Public defence: 2016-10-11 13:00 Sal/Hall B, Kista
    Paul, Ruma
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS. Université catholique de Louvain, Belgium.
    Paul, Ruma
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS. Université catholique de Louvain, Belgium.
    Building Distributed Systems for High-Stress Environments using Reversibility and Phase-Awareness2016Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Large-scale applications for mobile devices and the Internet of Things live in stressful real-world environments: they face both continuous faults and bursts of high fault rates. Typical faults are node crashes, network partitions, and communication delays. In this thesis, we propose a principled approach to building applications that survive in such environments by using the concepts of Reversibility and Phase. A system is Reversible if the set of operations it provides depends on its current stress, and not on the history of the stress. By stress we mean all the potential perturbing effects of the environment on the system, which includes both faults and other nonfunctional properties such as communication delay and bandwidth. Reversibility generalizes standard fault tolerance with nested fault models: when stress causes the fault rate to go outside one model, it is still inside the scope of the next model. As stress is a global condition that cannot easily be measured by individual nodes, we propose the concept of Phase in order to approximate the set of available operations of the system at each node. Phase is a per-node property, and can be determined with no additional distributed computation. We present two case studies. First, we present a transactional key-value store built on a structured overlay network and explain how to make it Reversible. Second, we present a distributed collaborative graphic editor built on top of the key-value store, and explain how to make it Phase-Aware, i.e., it optimizes its behavior according to a real-time observation of phase at each node using a Phase API. This shows the usefulness of Reversibility and Phase-Awareness for building large-scale Internet applications.
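
    The abstract does not spell out the Phase API, so the sketch below is a hypothetical illustration of the idea that phase is a per-node property computed from purely local observations; the phase names, thresholds, and inputs are assumptions, not the thesis’ design.

    ```python
    from enum import Enum

    class Phase(Enum):
        NORMAL = 1       # full operation set available
        DEGRADED = 2     # reduced operation set (e.g. weaker guarantees)
        PARTITIONED = 3  # only local / reversible operations

    def local_phase(reachable_peers, total_peers, median_rtt_ms,
                    rtt_limit_ms=500, quorum=0.5):
        """Hypothetical per-node phase estimate from purely local observations,
        requiring no additional distributed computation."""
        if total_peers == 0 or reachable_peers / total_peers < quorum:
            return Phase.PARTITIONED
        if median_rtt_ms > rtt_limit_ms:
            return Phase.DEGRADED
        return Phase.NORMAL

    # A node that currently reaches 7 of 10 peers with moderate delay:
    print(local_phase(reachable_peers=7, total_peers=10, median_rtt_ms=120))
    ```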

  • Public defence: 2016-10-13 15:05 T2, Huddinge
    Widman, Erik
    KTH, School of Technology and Health (STH), Medical Engineering, Medical Imaging.
    Widman, Erik
    KTH, School of Technology and Health (STH), Medical Engineering, Medical Imaging.
    Ultrasonic Methods for Quantitative Carotid Plaque Characterization2016Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Cardiovascular diseases are the leading causes of death worldwide and improved diagnostic methods are needed for early intervention and to select the most suitable treatment for patients. Currently, carotid artery plaque vulnerability is typically determined by visually assessing ultrasound B-mode images, which is influenced by user subjectivity. Since plaque vulnerability is correlated with the mechanical properties of the plaque, quantitative techniques are needed to estimate plaque stiffness as a surrogate for plaque vulnerability, which would reduce subjectivity during plaque assessment. The work in this thesis focused on three noninvasive ultrasound-based techniques to quantitatively assess plaque vulnerability and measure arterial stiffness. In Study I, a speckle tracking algorithm was validated in vitro to assess strain in common carotid artery (CCA) phantom plaques and thereafter applied in vivo to carotid atherosclerotic plaques, where the strain results were compared to visual assessments by experienced physicians. In Study II, hard and soft CCA phantom plaques were characterized with shear wave elastography (SWE), using phase and group velocity analysis while the phantoms were hydrostatically pressurized, and the results were validated with mechanical tensile testing. In Study III, the feasibility of assessing the stiffness of simulated plaques and the arterial wall with SWE was demonstrated in an ex vivo setup in small porcine aortas used as a human CCA model. In Study IV, SWE and pulse wave imaging (PWI) were compared when characterizing homogeneous soft CCA phantom plaques. The techniques developed in this thesis have demonstrated potential to characterize carotid artery plaques. The results show that the techniques have the ability to noninvasively evaluate the mechanical properties of carotid artery plaques, provide additional data when visually assessing B-mode images, and potentially provide improved diagnoses for patients suffering from cerebrovascular diseases.
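
    For context only: in shear wave elastography the measured shear wave speed is commonly converted to stiffness through the textbook relations μ = ρc² and E ≈ 3μ for nearly incompressible soft tissue. The sketch below applies these relations to hypothetical velocities; it is not the phase and group velocity analysis pipeline of Study II.

    ```python
    def shear_modulus_kpa(shear_wave_speed_m_s, density_kg_m3=1000.0):
        """Textbook estimate mu = rho * c_s^2 (assumes a purely elastic,
        homogeneous, nearly incompressible medium); returned in kPa."""
        return density_kg_m3 * shear_wave_speed_m_s ** 2 / 1000.0

    def youngs_modulus_kpa(shear_wave_speed_m_s, density_kg_m3=1000.0):
        """E ~= 3*mu for incompressible soft tissue (Poisson ratio ~0.5)."""
        return 3.0 * shear_modulus_kpa(shear_wave_speed_m_s, density_kg_m3)

    # Hypothetical group velocities for a "soft" and a "hard" phantom plaque:
    for label, c in [("soft", 2.0), ("hard", 5.0)]:
        print(f"{label}: mu = {shear_modulus_kpa(c):.1f} kPa, "
              f"E = {youngs_modulus_kpa(c):.1f} kPa")
    ```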

  • Public defence: 2016-10-14 09:53 Kollegiesalen, Stockholm
    Colmenares, Juan
    KTH, School of Electrical Engineering (EES), Electric power and energy systems.
    Colmenares, Juan
    KTH, School of Electrical Engineering (EES), Electric power and energy systems.
    Extreme Implementations of Wide-Bandgap Semiconductors in Power Electronics2016Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Wide-bandgap (WBG) semiconductor materials such as silicon carbide (SiC) and gallium nitride (GaN) allow higher voltage ratings, lower on-state voltage drops, higher switching frequencies, and higher maximum temperatures. All these advantages make them an attractive choice when high-power-density and high-efficiency converters are targeted. Two different gate-driver designs for SiC power devices are presented. First, a dual-function gate driver for a power module populated with SiC junction field-effect transistors, which finds a trade-off between fast switching speeds and low switching oscillations, is presented and experimentally verified. Second, a gate driver for SiC metal-oxide-semiconductor field-effect transistors with a short-circuit protection scheme, which is able to protect the converter against short-circuit conditions without compromising the switching performance during normal operation, is presented and experimentally validated. The benefits and issues of using parallel connection as the design strategy for high-efficiency and high-power converters have also been presented. In order to evaluate parallel connection, a 312 kVA three-phase SiC inverter with an efficiency of 99.3 % has been designed, built, and experimentally verified. If parallel connection is chosen as the design direction, an undesired trade-off between reliability and efficiency is introduced. A reliability analysis has been performed, which has shown that the gate-source voltage stress determines the reliability of the entire system. Decreasing the positive gate-source voltage could increase the reliability without significantly affecting the efficiency. If high-temperature applications are considered, relatively little attention has been paid to passive components for harsh environments. This thesis therefore also addresses high-temperature operation. The high-temperature performance of two different inductor designs has been tested up to 600 °C. Furthermore, a GaN power field-effect transistor was characterized down to cryogenic temperatures. An 85 % reduction of the on-state resistance was measured at −195 °C. Finally, an experimental evaluation of a 1 kW single-phase inverter at low temperatures was performed. A 33 % reduction in losses compared to room temperature was achieved at rated power.

  • Public defence: 2016-10-14 11:00 K1, Stockholm
    Vardanyan, Yelena
    KTH, School of Electrical Engineering (EES), Electric power and energy systems.
    Vardanyan, Yelena
    KTH, School of Electrical Engineering (EES), Electric power and energy systems.
    Optimal bidding of a hydropower producer in sequential power markets with risk assessment: Stochastic programming approach2016Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Short-term hydropower planning and bidding under uncertainty is a complicated task. The problem has become more challenging with the liberalized market environment of the last two decades. In addition to this reform of the electricity market, market participants, including hydropower producers, have experienced a second change in the form of intermittent wind power integration into power systems. Thus, previous decision support tools are not capable of fulfilling market participants’ expectations in the new competitive and highly uncertain environment. Intermittent power sources, namely wind power, increase the imbalances in the power system, which in turn increases the need for regulating power sources. Being a flexible energy source, hydropower can provide regulating power. For this purpose, new hydropower planning and bidding models must be developed, capable of addressing the uncertainties and dynamics of the marketplaces.

    In this dissertation, a set of new short-term hydropower planning and bidding models is developed for sequential electricity markets under price uncertainty. The developed stochastic coordinated hydropower planning and bidding tools can be classified into two classes: models with exogenous prices and models with endogenous prices.

    In the first class, the coordinated bidding tools address price uncertainty using scenario trees, which are built from the distribution functions of the unknown variables. Thus, the proposed coordinated bidding and planning tools consider all possible future prices and market outcomes together with the likelihood of these outcomes. To reflect the continuously clearing nature of intra-day and real-time markets, rolling planning is applied. In addition, the models apply risk measures as another way to hedge against uncertain prices.

    In the second class, stochastic strategic hydropower bidding models are developed using a stochastic bi-level optimization methodology. Here, market prices are calculated internally as the dual variables of the load balance constraints in the lower-level economic dispatch (ED) problems. To solve the stochastic bi-level optimization problem, the Karush-Kuhn-Tucker (KKT) optimality conditions are applied. This transformation converts the problem into a single-level stochastic program, which is simplified further using a corresponding discretization technique.
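
    As a minimal, self-contained illustration of scenario-based risk assessment (not the bi-level or rolling-planning models of the thesis), the sketch below evaluates a fixed day-ahead bid over hypothetical price scenarios and reports the expected profit together with the Conditional Value-at-Risk (CVaR) of the resulting profit distribution; the bid size, prices, probabilities, and water value are invented.

    ```python
    import numpy as np

    def scenario_profits(bid_mwh, prices_eur_mwh, water_value_eur_mwh=25.0):
        """Profit per price scenario for selling 'bid_mwh' in the day-ahead market,
        valuing the released water at a constant (hypothetical) opportunity cost."""
        return bid_mwh * (np.asarray(prices_eur_mwh) - water_value_eur_mwh)

    def cvar(profits, probs, alpha=0.95):
        """CVaR of profit: expected profit within the worst (1 - alpha) tail."""
        profits = np.asarray(profits, dtype=float)
        probs = np.asarray(probs, dtype=float)
        order = np.argsort(profits)                 # worst scenarios first
        p_sorted, pr_sorted = profits[order], probs[order]
        tail = 1.0 - alpha
        acc, taken, weights = 0.0, [], []
        for p, pr in zip(p_sorted, pr_sorted):
            w = min(pr, tail - acc)
            if w <= 0:
                break
            taken.append(p); weights.append(w); acc += w
        return float(np.dot(taken, weights) / tail)

    # Hypothetical equiprobable price scenarios (EUR/MWh) from a scenario tree:
    prices = [18.0, 24.0, 31.0, 45.0, 60.0]
    probs = [0.2] * 5
    prof = scenario_profits(bid_mwh=100.0, prices_eur_mwh=prices)
    print("expected profit:", float(np.dot(prof, probs)))
    print("CVaR(95%):      ", cvar(prof, probs, alpha=0.95))
    ```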

  • Public defence: 2016-10-14 13:00 FD5, STOCKHOLM
    Zhou, Tunhe
    KTH, School of Engineering Sciences (SCI), Applied Physics, Biomedical and X-ray Physics.
    Zhou, Tunhe
    KTH, School of Engineering Sciences (SCI), Applied Physics, Biomedical and X-ray Physics.
    Laboratory X-Ray Phase-Contrast Imaging: Methods and Comparisons2016Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    X-ray phase-contrast imaging has seen rapid development in recent decades due to its superior performance in imaging low-absorption objects compared to traditional attenuation-based x-ray imaging. Because of its higher demands on coherence, x-ray phase-contrast imaging is performed mostly at synchrotrons. With the development of different imaging techniques, laboratory sources and x-ray optics, x-ray phase-contrast imaging can now be implemented on laboratory systems, which is promising and practical for a broader range of applications.

    The subject of this thesis is the implementation, development and comparison of different laboratory phase-contrast methods using a liquid-metal-jet source. The three x-ray phase-contrast imaging methods included in this thesis are the propagation-, grating-, and speckle-based techniques. The grating-based method has been implemented on a laboratory system with a liquid-metal-jet source, which yields several times higher brightness than a standard solid-anode microfocus source. This allows shorter exposure times or a higher signal-to-noise ratio. The performance of the grating-based method has been experimentally and numerically compared with the propagation-based method, and the dose required to observe an object as a function of the object’s diameter has been investigated with simulations. The results indicate a lower dose requirement for the propagation-based method in this system, but a potential advantage for the grating-based method in detecting relatively large samples using a monochromatic beam.

    The speckle-based method, in both its speckle-tracking and speckle-scanning variants, has been implemented on a laboratory system for the first time, showing its adaptability to radiation of low temporal coherence. Tomography has been performed and shows the potential of this method for quantitative analysis of both the absorption and phase information of materials. As a basis for further optimization and comparisons with other methods, the noise properties of the differential phase contrast of the speckle-based method have been studied and an analytical expression for the noise variance has been introduced, showing a similarity to the grating-based method.
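
    As a rough illustration of the speckle-tracking principle (not the implementation used in the thesis), the sketch below locates a small speckle template inside a second image by maximizing normalized cross-correlation over integer-pixel shifts; in a real measurement this displacement of the speckle pattern is related to the refraction introduced by the sample.

    ```python
    import numpy as np

    def track_speckle(template, search):
        """Locate 'template' inside the larger 'search' window by maximizing
        zero-mean normalized cross-correlation over integer-pixel shifts."""
        th, tw = template.shape
        t = template - template.mean()
        tn = np.linalg.norm(t) + 1e-12
        best_score, best_shift = -np.inf, (0, 0)
        for dy in range(search.shape[0] - th + 1):
            for dx in range(search.shape[1] - tw + 1):
                patch = search[dy:dy + th, dx:dx + tw]
                p = patch - patch.mean()
                score = float((t * p).sum()) / (tn * (np.linalg.norm(p) + 1e-12))
                if score > best_score:
                    best_score, best_shift = score, (dy, dx)
        return best_shift, best_score

    # Synthetic check: shift a random speckle pattern by (3, 5) pixels and recover it.
    rng = np.random.default_rng(0)
    ref = rng.random((64, 64))
    template = ref[20:36, 20:36]                       # 16 x 16 speckle template
    shifted = np.roll(np.roll(ref, 3, axis=0), 5, axis=1)
    search = shifted[18:44, 18:44]                     # search window around the template
    (dy, dx), score = track_speckle(template, search)
    print((dy + 18 - 20, dx + 18 - 20), round(score, 3))   # recovered shift ~ (3, 5), score ~ 1.0
    ```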

  • Public defence: 2016-10-14 13:00 T 1 (Emmy Rappestad), Huddinge
    Lagerstedt, Marianne
    KTH, School of Technology and Health (STH).
    Lagerstedt, Marianne
    KTH, School of Technology and Health (STH).
    Mot nätverkssjukvård i komplex miljö: - behov av en vetenskaplig syn på ledning för säker vård och effektiv resursanvändning2016Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Since 2008, the advanced home healthcare agencies (ASiH) in a large Swedish county council have undergone a transformation to become part of a coming concept: networked healthcare (NVS). NVS means that intermediate multi-organizational healthcare (IMV) will often be provided in the home and, from 2013, to an increasing number of patients in different age groups with different diagnoses and medical conditions, i.e. with a large variability of needs. At the same time, IMV has proved difficult to implement in a resource-efficient and patient-safe way. Based on theories from Command and Control Science, the safety problems that arise in connection with IMV are a sign of a less recognized and increasing need for the direction and coordination support that IMV requires.

    Using a case-study-based research approach with interactive elements, different qualitative methods have been used in two phases between 2008 and 2013. The first phase is characterized by a phenomenological approach, while the second phase has a critical hermeneutic approach. The research methods include field visits with informal discussions, in-depth interviews, validation with respondents, and two different methodologies for text analysis.

    The main result shows that the practical circumstances aggravating safe care consist of less recognized and, from 2013, increasing problems with direction and coordination, brought about by the expansion of advanced IMV in the home as part of the NVS concept. This is also a result of inadequate and inappropriate direction and coordination support for IMV.

    The thesis concludes that NVS represents a resource-intensive healthcare concept, which requires a new view of the management issue and a network-related methodology for direction and coordination. This is needed to promote ethical, equitable, patient-safe and dignified advanced IMV so that resources can be used optimally, through shared responsibility and coordination in patient-uniquely designed network constellations as a given work model.

  • Public defence: 2016-10-14 14:00 Kollegiesalen, Stockholm
    Karlsson, Caroline
    KTH, School of Architecture and the Built Environment (ABE), Sustainable development, Environmental science and Engineering, Land and Water Resources Engineering.
    Karlsson, Caroline
    KTH, School of Architecture and the Built Environment (ABE), Sustainable development, Environmental science and Engineering, Land and Water Resources Engineering.
    Geo-environmental considerations in transport infrastructure planning2016Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Transport infrastructure constitutes one of the key factors in a country’s economic growth. Investment in new transport infrastructure may cause environmental impacts, and if a project has several alternative corridors under consideration, each corridor will have a different impact on the environment. The European Commission has stated that natural resources are important to the quality of life; efficient use of resources will therefore be key to addressing future climate change and reducing greenhouse gas (GHG) emissions. This implies that, in an ever-growing global society, resource efficiency as well as the choice of transport infrastructure corridor become even more important to consider. The aim of this research project was to contribute to early transport infrastructure planning through the development and implementation of easily understandable geological criteria and models for decision support. Moreover, the intention was to assess how geological information can be developed and extracted from existing spatial data and coupled with other areas of interest, such as ecology and life cycle assessment. It has previously been established that geological information plays an important role in transport infrastructure planning, as the geological characteristics of the proposed area as well as the possibilities for material use influence the project. Therefore, in order to couple geological information to early transport infrastructure planning, four studies (Papers I-IV) were undertaken in which methods were developed and tested for the inclusion of geological information. The first study (Paper I) demonstrates how alternative road corridors could be evaluated using geological information on soil thickness, soil type and rock outcrops, bedrock quality and slope in combination with ecological information. The second study (Paper II) shows how geological information on soil thickness and stratigraphy can be combined with life cycle assessment (LCA) to assess the corresponding greenhouse gas emissions and energy use for the proposed road corridors. The difficulty of using expert knowledge for susceptibility assessment of natural hazards, i.e. flooding, landslides and debris flows, in early transport infrastructure planning was presented in the third study (Paper III). In this study, the expert knowledge was used in a multi-criteria analysis where the analytic hierarchy process (AHP) was chosen as the decision rule. This decision rule was compared to the weighted linear combination (WLC) decision rule using two different weighting schemes. In all the mentioned studies, the importance of soil thickness information was highlighted. Therefore, the fourth and final study (Paper IV) presented a new methodology for modelling soil thickness in areas where data are sparse. A simplified regolith model (SRM) was developed in order to estimate the regolith thickness, i.e. soil thickness, for previously glaciated terrain with a high frequency of rock outcrops. The SRM was based on a digital elevation model (DEM) and an optimized search algorithm. The methods developed to couple geological information with other areas of interest are a tentative step towards an earlier geo-environmental planning process. However, the methods need to be tested in other areas with different geological conditions.
    The combination of geological information in geographic information systems (GIS) with multi-criteria analysis (MCA) enabled the integration of knowledge for decision making; it also allowed the relative importance of various aspects of geological information, and of geological information relative to other fields of interest such as ecology, to be adjusted through the selected weighting schemes. The results showed that synergies exist between ecology and geology, where important geological considerations could also have positive effects on ecological considerations. Soil thickness was very important for GHG emissions and energy use, whereas stratigraphical knowledge had a minor influence. When using expert knowledge, the consistency of the expert judgements also needs to be considered. It was shown that experts tended to be inconsistent in their judgements, and that some consistency could be reached if the judgements were aggregated instead of used separately. The results also showed that the developed SRM produced relatively accurate results for data-sparse areas, and that this model could be used in several projects where knowledge of soil thickness is important but lacking. It was concluded that geological information should be considered in early transport infrastructure planning, and that by using GIS and MCA it is possible to evaluate different aspects of geological information in order to improve decision making.
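
    For readers unfamiliar with the AHP decision rule mentioned above, the sketch below derives criterion weights from a pairwise comparison matrix via its principal eigenvector and computes Saaty’s consistency ratio, the standard check on whether an expert’s judgements are internally consistent; the example matrix and criteria are hypothetical, not taken from Paper III.

    ```python
    import numpy as np

    # Saaty's random consistency index for matrix sizes 1..8.
    RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}

    def ahp_weights(pairwise):
        """Criterion weights = normalized principal eigenvector of the reciprocal
        pairwise comparison matrix; also return the consistency ratio."""
        A = np.asarray(pairwise, dtype=float)
        n = A.shape[0]
        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()
        lam_max = eigvals[k].real
        ci = (lam_max - n) / (n - 1)       # consistency index
        cr = ci / RANDOM_INDEX[n]          # consistency ratio (< 0.1 is usually acceptable)
        return w, cr

    # Hypothetical pairwise judgements for three geological criteria:
    # soil thickness vs soil type vs bedrock quality.
    A = [[1,   3,   5],
         [1/3, 1,   2],
         [1/5, 1/2, 1]]
    weights, cr = ahp_weights(A)
    print("weights:", np.round(weights, 3), "CR:", round(cr, 3))
    ```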

  • Public defence: 2016-10-17 13:00 F3, Stockholm
    Poulikidou, Sofia
    KTH, School of Architecture and the Built Environment (ABE), Sustainable development, Environmental science and Engineering, Environmental Strategies Research (fms).
    Poulikidou, Sofia
    KTH, School of Architecture and the Built Environment (ABE), Sustainable development, Environmental science and Engineering, Environmental Strategies Research (fms).
    Assessing design strategies for improved life cycle environmental performance of vehicles2016Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Vehicle manufacturers have adopted different strategies for improving the environmental performance of their fleets, including lightweight design and alternative drivetrains such as electric vehicles (EVs). Both strategies reduce energy use during the use phase but may result in a relative increase of the impact during other life cycle stages. To address this, a life cycle approach is needed when vehicle design strategies are developed. The thesis explores the extent to which such a life cycle approach is adopted today and assesses the potential of these strategies to reduce the life cycle impact of vehicles. Moreover, it aims to contribute to method development for life cycle considerations during product development and material selection.

    Current practices were explored in an empirical study with four vehicle manufacturers. The availability of tools for identifying, monitoring and assessing design strategies was explored in a literature review. The results of the empirical study showed that environmental considerations during product development often lack a life cycle perspective. Regarding the use of tools, only a limited number were utilized systematically by the studied companies, despite the numerous tools available in the literature.

    The influence of new design strategies on the life cycle environmental performance of vehicles was assessed in three case studies: two looking into lightweight design and one into EVs. Both strategies resulted in energy and GHG emission savings, although the impact during manufacturing increases due to the advanced materials used. Assumptions relating to the operating conditions of the vehicle, e.g. lifetime driving distance or, for EVs, the carbon intensity of the energy mix, influence the size of this trade-off. Despite its low share of the environmental impact, end-of-life (EOL) treatment is important for the overall performance of vehicles.

    The thesis contributed to method development by suggesting a systematic approach for material selection. The approach combines material and environmental analysis tools, thus increasing the possibilities for life cycle improvements while minimizing the risk of sub-optimization.
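
    The manufacturing-versus-use-phase trade-off discussed above can be made concrete with a simple break-even calculation: the extra production burden of a lightweight design is paid back once the accumulated use-phase savings exceed it. The numbers below are purely hypothetical placeholders, not results from the case studies.

    ```python
    def break_even_distance_km(extra_production_kg_co2, use_phase_saving_kg_co2_per_km):
        """Driving distance at which a lightweight design's extra production
        emissions are offset by its per-kilometre use-phase savings."""
        return extra_production_kg_co2 / use_phase_saving_kg_co2_per_km

    # Hypothetical example: 400 kg CO2-eq extra from advanced lightweight materials,
    # 0.004 kg CO2-eq saved per km driven thanks to the lower mass.
    extra, saving = 400.0, 0.004
    d = break_even_distance_km(extra, saving)
    print(f"break-even after {d:,.0f} km")            # 100,000 km
    lifetime_km = 200_000
    print("net life cycle saving:", (lifetime_km - d) * saving, "kg CO2-eq")
    ```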

  • Public defence: 2016-10-20 10:00 F3, Stockholm
    Swarén, Mikael
    KTH, School of Engineering Sciences (SCI), Mechanics.
    Swarén, Mikael
    KTH, School of Engineering Sciences (SCI), Mechanics.
    Objective Analysis Methods in the Mechanics of Sports2016Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Sports engineering can be considered the bridge between the knowledge of sports science and the principles of engineering, and it has an important role not only in improving athletic performance but also in increasing the safety of the athletes. Testing and optimization of sports equipment and athletic performance are essential for supporting athletes in their quest to reach the podium. However, most of the equipment used by world-class athletes is chosen based only on subjective tests and the athletes’ feelings. Consequently, one of the aims of this thesis was to combine mechanics and mathematics to develop new objective test methods for sports equipment. Another objective was to investigate the possibility of accurately tracking and analysing cross-country skiing performance by using a real-time locating system. A long-term aim is to contribute to increased knowledge about objective test and analysis methods in sports. The main methodological advancements are the modification of established test methods for sports equipment and the implementation of spline-interpolated measured positioning data to evaluate cross-country skiing performance. The first two papers show that it is possible to design objective yet sport-specific test methods for different sports equipment. New test devices and methodologies are proposed for alpine ski helmets and cross-country ski poles. The third paper gives suggestions for improved test setups, and theoretical simulations are introduced for glide tests of skis. It is shown, in the fourth paper, that data from a real-time locating system in combination with a spline model offer considerable potential for performance analysis in cross-country sprint skiing. In the last paper, for the first time, propulsive power during a cross-country sprint skiing race is estimated by applying a power balance model to spline-interpolated measured positioning data, enabling in-depth analyses of power output and pacing strategies in cross-country skiing. Even though it has not been a first-priority aim in this work, the results from the first two papers have been used by manufacturers to design new helmets with increased safety properties and cross-country ski poles with increased force transfer properties. In summary, the results of this thesis demonstrate the feasibility of using mechanics and mathematics to increase the objectiveness and relevance when analysing sports equipment and athletic performance.
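
    A minimal sketch of the power balance idea applied to spline-interpolated positioning data: fit a cubic spline to measured along-track position, differentiate it for speed and acceleration, and sum kinetic, gravitational, frictional and drag terms. The model form and all parameter values (mass, drag area, slope, friction coefficient) are simplifying assumptions for illustration, not the validated model of the last paper.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def propulsive_power(t_s, pos_m, slope_rad, mass_kg=80.0,
                         cd_a_m2=0.55, rho=1.2, mu=0.03, g=9.81):
        """Estimate propulsive power from spline-smoothed position data using a
        simple power balance: kinetic + gravity + gliding friction + air drag."""
        spline = CubicSpline(t_s, pos_m)
        v = spline.derivative(1)(t_s)          # speed along the track (m/s)
        a = spline.derivative(2)(t_s)          # along-track acceleration (m/s^2)
        p_kin = mass_kg * a * v
        p_grav = mass_kg * g * np.sin(slope_rad) * v
        p_fric = mu * mass_kg * g * np.cos(slope_rad) * v
        p_drag = 0.5 * rho * cd_a_m2 * v ** 3
        return p_kin + p_grav + p_fric + p_drag

    # Hypothetical 1 Hz positioning samples over a 2 % uphill section:
    t = np.arange(0.0, 30.0, 1.0)
    pos = 6.0 * t + 0.02 * t ** 2              # gently accelerating skier
    slope = np.arctan(0.02)
    print(np.round(propulsive_power(t, pos, slope)[:5], 1))  # power estimates (W)
    ```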

  • Public defence: 2016-10-21 09:30 Kollegiesalen, Stockholm
    Wallhagen, Marita
    KTH, School of Architecture and the Built Environment (ABE), Sustainable development, Environmental science and Engineering, Environmental Strategies Research (fms). University of Gävle.
    Wallhagen, Marita
    KTH, School of Architecture and the Built Environment (ABE), Sustainable development, Environmental science and Engineering, Environmental Strategies Research (fms). University of Gävle.
    Environmental Assessment Tools for Neighbourhoods and Buildings in relation to Environment, Architecture, and Architects2016Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis explores Neighbourhood and Building Environmental Assessment Tools’ (NBEATs’) function as assessment tools and decision support, and their relation to environment, architecture and architects. This is done by analysing, testing, and discussing a number of NBEATs (LEED-NC, Code for Sustainable Homes, EcoEffect, LEED-ND, BREEAM-C, and ENSLIC-tool), their manuals and use. Moreover, professionals’ (architects’) self-rated opinions regarding use and knowledge of NBEATs and environmental aspects are surveyed.

    Similarities and differences between NBEATs are found regarding content, structure, weighting and the indicators used. Indicators, distinguished as procedure, performance and feature indicators, are used to varying extents to assess social, environmental and technical aspects. NBEATs’ relation to environmental sustainability has limitations due to non-transparency, tradable indicators, relative measures, low criteria levels, a limited life cycle perspective, and the exclusion of relevant environmental aspects, such as embedded toxic substances, nutrient cycles, land use change, and ecosystem services. Ratings and architecture are influenced by NBEATs in varying ways. Higher criteria levels would probably increase their impact on architecture. Thus, more research regarding NBEATs and their links to architectural design, theory and practice is welcomed.

    There is limited use of NBEATs as decision support in early design phases, such as in architectural competitions. Architects rate the importance of environmental aspects highly, but few rate their own skill in handling environmental aspects highly. This calls for increasing knowledge and know-how of environmental strategies and solutions among architects, and for adaptation of NBEATs to early design processes. The values NBEATs reflect, and the values we want them to create, are also important. To support ‘environmental’ architecture, an increased socio-eco-technological system perspective is put forward, and other measures besides NBEATs are needed.

  • Public defence: 2016-10-21 10:00 F3, Stockholm
    Farooqui, Maaz
    KTH, School of Engineering Sciences (SCI), Aeronautical and Vehicle Engineering.
    Farooqui, Maaz
    KTH, School of Engineering Sciences (SCI), Aeronautical and Vehicle Engineering.
    Innovative noise control in ducts2016Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The objective of this doctoral thesis is to study three different innovative noise control techniques in ducts, namely acoustic metamaterials, porous absorbers and microperforates. Much research has been done on all three topics in the context of duct acoustics. This research assesses the potential of the acoustic metamaterial technique and compares it to conventional methods using microperforated plates and/or porous materials.

    The objective of the metamaterials part is to develop a physical approach to model and synthesize bulk moduli and densities in order to control the wave propagation pattern, creating quiet zones in the targeted fluid domain. This is achieved using an array of locally resonant metallic patches. In addition, a novel thin slow-sound material is proposed in the acoustic metamaterial part of this thesis. This slow-sound material is a quasi-labyrinthine structure flush-mounted to a duct, comprising coplanar quarter-wavelength resonators that aim to slow the speed of sound at selected resonance frequencies. A good agreement between theoretical analysis and experimental measurements is demonstrated.

    The second technique is based on acoustic porous foam and concerns the modeling and characterization of a novel porous metallic foam absorber inside ducts. This material proved to be a comparable or better sound absorber than conventional porous absorbers, but with more robust and less degradable properties. A material characterization of this porous absorber from a simple transfer matrix measurement is proposed. The last part of this research is focused on the impedance of perforates with grazing flow on both sides. Modeling of the double-sided grazing flow impedance is done using a modified version of an inverse semi-analytical technique. A minimization scheme is used to find the liner impedance value in the complex plane that matches the calculated sound field to the measured one at the microphone positions.
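
    For orientation, an ideal closed quarter-wavelength resonator of length L resonates near f_n = (2n − 1)c/(4L); the sketch below sizes a small array of such resonators for a few target frequencies, ignoring end corrections and visco-thermal losses, which an actual design such as the one in the thesis would need to account for.

    ```python
    def quarter_wave_resonance_hz(length_m, c=343.0, n=1):
        """n-th resonance of an ideal closed quarter-wavelength resonator."""
        return (2 * n - 1) * c / (4.0 * length_m)

    def resonator_length_m(target_hz, c=343.0):
        """Length placing the first resonance at 'target_hz' (no end correction)."""
        return c / (4.0 * target_hz)

    # Size a small coplanar array for hypothetical target frequencies in a duct:
    for f in (500.0, 1000.0, 2000.0):
        L = resonator_length_m(f)
        print(f"target {f:6.0f} Hz -> L = {L * 100:5.2f} cm "
              f"(check: f1 = {quarter_wave_resonance_hz(L):.0f} Hz)")
    ```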

  • Public defence: 2016-10-21 10:00 Gard-aulan, Solna
    Periyannan Rajeswari, Prem Kumar
    KTH, School of Biotechnology (BIO), Proteomics and Nanobiotechnology.
    Periyannan Rajeswari, Prem Kumar
    KTH, School of Biotechnology (BIO), Proteomics and Nanobiotechnology.
    Droplet microfluidics for single cell and nucleic acid analysis2016Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Droplet microfluidics is an emerging technology for the analysis of single cells and biomolecules at high throughput. The controlled encapsulation of particles, along with their surrounding microenvironment, in discrete droplets, which act as miniaturized reaction vessels, allows millions of particles to be screened in parallel. By utilizing the unit operations developed to generate, manipulate and analyze droplets, this technology platform has been used to miniaturize a wide range of complex biological assays including, but not limited to, directed evolution, rare cell detection, single cell transcriptomics, rare mutation detection and drug screening.

    The aim of this thesis is to develop droplet microfluidics based methods for the analysis of single cells and nucleic acids. In Paper I, a method for time-series analysis of mammalian cells, using automated fluorescence microscopy and image analysis techniques, is presented. The cell-containing droplets were trapped on-chip and imaged continuously to assess the viability of hundreds of isolated individual cells over time. This method can be used for studying the dynamic behavior of cells. In Paper II, the influence of droplet size on the cell division and viability of mammalian cell factories during cultivation in droplets is presented. The ability to achieve continuous cell division in droplets will enable the development of mammalian cell factory screening assays in droplets. In Paper III, a workflow for detecting the outcome of a droplet PCR assay using fluorescently color-coded beads is presented. This workflow was used to detect the presence of DNA biomarkers associated with poultry pathogens in a sample. The use of color-coded detection beads will help to improve the scalability of the detection panel, allowing multiple targets to be detected in a sample. In Paper IV, a novel unit operation for label-free enrichment of particles in droplets using acoustophoresis is presented. This technique will be useful for developing droplet-based assays that require label-free enrichment of cells or particles and removal of droplet content. In general, droplet microfluidics has proven to be a versatile tool for biological analysis. In the years to come, droplet microfluidics could potentially be used to improve clinical diagnostics and bio-based production processes.
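
    As generic background to the encapsulation step (not a result from the appended papers), cell loading into droplets is commonly described by Poisson statistics: with mean occupancy λ, the fraction of droplets holding exactly k cells is e^(−λ)λ^k/k!. The sketch below evaluates this for a hypothetical λ.

    ```python
    from math import exp, factorial

    def poisson_fraction(k, lam):
        """Fraction of droplets containing exactly k cells at mean occupancy lam."""
        return exp(-lam) * lam ** k / factorial(k)

    lam = 0.1  # hypothetical: on average one cell per ten droplets
    empty = poisson_fraction(0, lam)
    single = poisson_fraction(1, lam)
    multi = 1.0 - empty - single
    print(f"empty: {empty:.3f}, single-cell: {single:.3f}, multi-cell: {multi:.4f}")
    ```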

  • Public defence: 2016-10-24 12:00 Sal C, Electrum, Kista
    Rameshan, Navaneeth
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Rameshan, Navaneeth
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    On the Role of Performance Interference in Consolidated Environments2016Doctoral thesis, monograph (Other academic)
    Abstract [en]

    With the advent of resource-shared environments such as the Cloud, virtualization has become the de facto standard for server consolidation. While consolidation improves utilization, it causes performance interference between Virtual Machines (VMs) due to contention for shared resources such as CPU, Last Level Cache (LLC) and memory bandwidth. Over-provisioning resources for performance-sensitive applications can guarantee Quality of Service (QoS); however, it results in low machine utilization. Thus, assuring QoS for performance-sensitive applications while allowing co-location has been a challenging problem. In this thesis, we identify ways to mitigate performance interference without undue over-provisioning and also point out the need to model and account for performance interference to improve the reliability and accuracy of elastic scaling. The end goal of this research is to leverage these observations to provide efficient resource management that is both performance- and cost-aware. Our main contributions are threefold: first, we improve the overall machine utilization by executing best-effort applications alongside latency-critical applications without violating their performance requirements. Our solution is able to dynamically adapt to and leverage the changing workload/phase behaviour to execute best-effort applications without causing excessive interference on performance; second, we identify that certain performance metrics used for elastic scaling decisions may become unreliable if performance interference is unaccounted for. By modelling performance interference, we show that these performance metrics become reliable in a multi-tenant environment; and third, we identify and demonstrate the impact of interference on the accuracy of elastic scaling and propose a solution to significantly minimise performance violations at a reduced cost.

  • Public defence: 2016-10-25 10:00 F3, Stockholm
    Spross, Johan
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Soil and Rock Mechanics.
    Spross, Johan
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Soil and Rock Mechanics.
    Toward a reliability framework for the observational method2016Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Constructing sustainable structures in rock that satisfy all predefined technical specifications requires rational and effective construction methods. When the geotechnical behaviour is hard to predict, the European design code, Eurocode 7, suggests application of the observational method to verify that the performance is acceptable. The basic principle of the method is to accept predefined changes in the design during construction to comply with the actual ground conditions, if the current design is found unsuitable. Even though this in theory should ensure an effective design solution, formal application of the observational method is rare.

    Investigating the applicability of the observational method in rock engineering, the aim of this thesis is to identify, highlight, and resolve the aspects of the method that limit its wider application. Furthermore, the thesis aims to improve the conceptual understanding of how design decisions should be made when large uncertainties are present.

    The main research contribution is a probabilistic framework for the observational method. The suggested methodology allows comparison of the merits of the observational method with those of conventional design. Among other things, the thesis also discusses (1) the apparent contradiction between the preference for advanced probabilistic calculation methods and sound, qualitative engineering judgement, (2) how the establishment of limit states and alarm limits must be carefully considered to ensure structural safety, and (3) the applicability of the Eurocode definition of the observational method and the implications of deviations from its principles.
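
    A minimal illustration of the kind of probabilistic reasoning involved (not the framework itself): estimate by Monte Carlo simulation the probability that an observed quantity exceeds an alarm limit, given an uncertain prediction. The distribution choice, parameters and limit below are invented for the example.

    ```python
    import numpy as np

    def prob_exceeding_limit(mean, cov, limit, n=200_000, seed=0):
        """Monte Carlo estimate of P(response > limit) for a lognormally
        distributed response with given mean and coefficient of variation."""
        rng = np.random.default_rng(seed)
        sigma_ln = np.sqrt(np.log(1.0 + cov ** 2))
        mu_ln = np.log(mean) - 0.5 * sigma_ln ** 2
        samples = rng.lognormal(mu_ln, sigma_ln, n)
        return float((samples > limit).mean())

    # Hypothetical: predicted deformation 20 mm, 30 % uncertainty, alarm limit 35 mm.
    p = prob_exceeding_limit(mean=20.0, cov=0.30, limit=35.0)
    print(f"P(deformation > alarm limit) ~ {p:.4f}")
    ```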

  • Public defence: 2016-10-28 09:15 D2, Stockholm
    Tholerus, Emmi
    KTH, School of Electrical Engineering (EES), Fusion Plasma Physics.
    Tholerus, Emmi
    KTH, School of Electrical Engineering (EES), Fusion Plasma Physics.
    The dynamics of Alfvén eigenmodes excited by energetic ions in toroidal plasmas2016Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Future fusion power plants based on magnetic confinement will deal with plasmas that inevitably contain energetic (non-thermal) particles. These particles come, for instance, from fusion reactions or from external heating of the plasma. Ensembles of energetic ions can excite eigenmodes in the Alfvén frequency range to such an extent that the resulting wave fields redistribute the energetic ions and potentially eject them from the plasma. The redistribution of ions may cause a substantial reduction in heating efficiency. Understanding the dynamics of such instabilities is necessary to optimise the operation of fusion experiments and of future fusion power plants.

    Two models have been developed to simulate the interaction between energetic ions and Alfvén eigenmodes. One is a bump-on-tail model, of which two versions have been developed: one fully nonlinear and one quasilinear. The quasilinear version has a lower dimensionality of particle phase space than the nonlinear one. Unlike previous similar studies, the bump-on-tail model contains a decorrelation of the wave-particle phase in order to model stochasticity of the system. When the characteristic time scale for macroscopic phase decorrelation is similar to or shorter than the time scale of nonlinear wave-particle dynamics, the nonlinear and the quasilinear descriptions quantitatively agree. A finite phase decorrelation changes the growth rate and the saturation amplitude of the wave mode in systems with an inverted energy distribution around the wave-particle resonance. Analytical expressions for the correction of the growth rate and the saturation amplitude have been derived, which agree well with numerical simulations. A relatively weak phase decorrelation also diminishes frequency chirping events of the eigenmode.

    The second model is called FOXTAIL, and it has a wider regime of validity than the bump-on-tail model. FOXTAIL is able to simulate systems with multiple eigenmodes, and it includes the effects of different individual particle orbits relative to the wave fields. Simulations with FOXTAIL and the nonlinear bump-on-tail model have been compared in order to determine the regimes of validity of the bump-on-tail model quantitatively. Studies of two-mode scenarios confirmed the expected consequences of fulfilling the Chirikov criterion for resonance overlap. The influence of ion cyclotron resonance heating (ICRH) on the eigenmode-energetic ion system has also been studied, showing effects qualitatively similar to those seen in the presence of phase decorrelation.
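
    The Chirikov criterion mentioned above states, roughly, that two wave-particle resonances overlap, and particle motion between them becomes stochastic, when the sum of their resonance half-widths exceeds their separation. A minimal numerical check of that condition, with hypothetical numbers, is sketched below.

    ```python
    def chirikov_parameter(full_width1, full_width2, separation):
        """K = sum of the two resonance half-widths divided by their separation;
        K >= 1 suggests resonance overlap and stochastic transport between the modes."""
        return 0.5 * (full_width1 + full_width2) / separation

    # Hypothetical resonance full widths and spacing (arbitrary frequency units):
    for w1, w2, sep in [(1.2, 1.0, 1.0), (0.3, 0.2, 1.0)]:
        K = chirikov_parameter(w1, w2, sep)
        print(f"widths=({w1}, {w2}), separation={sep}: K={K:.2f} ->",
              "overlap" if K >= 1.0 else "no overlap")
    ```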

    Another model, describing the efficiency of fast wave current drive, has been developed in order to study the influence of passive components close to the antenna, in which currents can be induced by the antenna-generated wave field. It was found that the directivity of the launched wave, averaged over the model parameters, was generally lowered by the presence of passive components, except for low values of the single-pass damping of the wave, where the directivity was slightly increased but reversed in the toroidal direction.