  • Gemvik, Agnes
    KTH, Skolan för arkitektur och samhällsbyggnad (ABE), Hållbar utveckling, miljövetenskap och teknik.
    Winter Activity Park: A feasibility study of the implementation in Södertälje municipality (2020). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Physical inactivity is a growing issue today, especially during the winter period, leading to various health problems. The new concept of a Winter Activity Park can be a possible solution for Swedish municipalities as a means to increase spontaneous physical activity, and thereby improve public health, during the winter season. The concept originated from the need for increased spontaneous physical activity during winter and involves creating a physical activity facility using artificial snow. Södertälje is a municipality where a Winter Activity Park could act as a solution to increase spontaneous physical activity.

    The aim of the study is to assess the feasibility of a Winter Activity Park, using artificial snow, in Södertälje municipality. The study sets out to determine what is technically required in order to implement a Winter Activity Park, and by including two specific sites in Södertälje (Stadsparken and Tveta Friluftsgård), it can be assessed how well each site is meeting the requirements. Additionally, the study sets out to assess what possible environmental impacts the implementation may lead to at each site, as well as determining how artificial snow should be obtained to have the least contribution to climate change.

    The results show that several technical requirements need to be fulfilled at a site in order to implement a Winter Activity Park. Neither of the chosen sites currently meets all the technical requirements, although it is considered possible for both sites to fulfil them at a later stage. By conducting an EIA, it was concluded that Tveta Friluftsgård was likely to experience less negative environmental consequences from the implementation of a Winter Activity Park than Stadsparken. By calculating the carbon footprint for the alternative ways of obtaining artificial snow, it was concluded that producing snow directly at a site, using a lake as a water source, would be the most favourable in terms of least contribution to climate change.

    As a final conclusion, it was determined that a Winter Activity Park is feasible in Södertälje municipality, with Tveta Friluftsgård and production directly at the site appearing to be the best option. However, depending on the purpose of the Winter Activity Park, other sites and methods of obtaining snow can be more suitable. In order to create a more comprehensive view of the feasibility and sustainability of the concept, future studies should investigate its social and economic aspects.
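    The carbon-footprint comparison between ways of obtaining artificial snow can be sketched as a simple model. All parameter names and example values below are hypothetical placeholders for illustration, not figures from the study:

```python
def footprint_onsite(volume_m3, kwh_per_m3, grid_kg_co2_per_kwh):
    """On-site production: snow guns drawing water from a nearby lake,
    powered by grid electricity (all parameters hypothetical)."""
    return volume_m3 * kwh_per_m3 * grid_kg_co2_per_kwh

def footprint_trucked(volume_m3, m3_per_truck, distance_km, kg_co2_per_km):
    """Alternative: snow produced elsewhere and hauled in by truck;
    emissions scale with the number of round trips."""
    trips = -(-volume_m3 // m3_per_truck)  # ceiling division
    return trips * 2 * distance_km * kg_co2_per_km

# Illustrative only: 1000 m3 of snow, low-carbon grid electricity
onsite = footprint_onsite(1000, kwh_per_m3=2.0, grid_kg_co2_per_kwh=0.05)
trucked = footprint_trucked(1000, m3_per_truck=15, distance_km=30, kg_co2_per_km=0.9)
```

    With parameters of this shape, on-site production avoids the per-trip transport emissions entirely, which is consistent with the study's conclusion that producing snow directly at the site is the most favourable option.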

    Full text (pdf)
    fulltext
  • Teng, Chih-Chin
    KTH, Skolan för arkitektur och samhällsbyggnad (ABE), Hållbar utveckling, miljövetenskap och teknik.
    Implementing simplified LCA software in heavy-duty vehicle design: An evaluation study of LCA data quality for supporting sustainable design decisions (2020). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Simplified life cycle assessment (LCA) methods quickly deliver an estimate of a product's life-cycle impacts without intense data requirements, which makes them practical tools in the early stages of product development (PD) for supporting sustainable decisions. However, obstacles remain to integrating LCA tools efficiently and effectively into designers' daily workflows. To give a comprehensive overview of the potential challenges in integrating simplified LCA software into vehicle PD processes, this research conducts accessibility, intrinsic, contextual and representational data quality evaluations of two vehicle-LCA software packages, Granta Selector and the Modular-LCA Kit, by means of interviews, case studies and usability testing.

    From the four data quality evaluations, the results demonstrate (1) the importance of the company's collaboration with the software developers to ensure the software's accessibility; (2) the data accuracy constraints of the software due to the generic database and over-simplified methods; (3) the vehicle design engineers' reactions to the two packages' data fulfilment when building complicated vehicle LCA models; and (4) the effectiveness of the LCA results in supporting sustainable design decisions.

    Overall, the reliability of the two simplified LCA packages is sufficient only in the very first stage of PD, while user satisfaction and the effectiveness of the simplified LCA data are positive for design engineers with a basic level of sustainability knowledge. Still, there is a need for systematic strategies for integrating the software into PD processes. A three-pillar strategy covering company administrative policy; software management and promotion; and LCA and vehicle data life-cycle management could tackle the data gaps and limitations of the software and the company. Based on this strategy, the research proposes an example roadmap for Scania.

    Full text (pdf)
    fulltext
  • Ding, Kaijie
    KTH, Skolan för arkitektur och samhällsbyggnad (ABE), Hållbar utveckling, miljövetenskap och teknik.
    Arsenite removal from contaminated water by different sorbent materials (2020). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Arsenic (As) contamination is a worldwide problem, and millions of people are suffering from it. There are two major inorganic forms of As in waters: arsenate(V) and arsenite(III), and adsorption to a sorbent material may be an efficient method to handle them. In this study, we focused on As(III), the more toxic form, which predominates under reducing conditions. The As(III) removal properties of four sorbent materials: hydrotalcite, Mg−Al layered double hydroxide, amorphous aluminium hydroxide and amorphous titanium oxide, are examined from the following viewpoints: As(III) adsorption, the effects of pH, the effects of adsorbent concentration, adsorption as a function of dissolved As(III), and the effect of co-existing anions (HCO3− and PO43−).

    The maximum adsorption of As(III) to HT (0.1 mmol As(III)/g adsorbent), Mg-Al LDH (0.1 mmol As(III)/g adsorbent), am-Al(OH)3 (0.22 mmol As(III)/g adsorbent), and am-TiO2 (0.21 mmol As(III)/g adsorbent) occurred at pH 7.5, 7, 7, 8, respectively. At this pH, approximately 20%, 62%, 35%, and 98.3%, respectively, of the added As(III) was adsorbed. When the As(III) to sorbent ratio was increased, the adsorption was instead around 7% to am-Al(OH)3 (2.2 mmol As(III)/g adsorbent), and 46.3% to am-TiO2 (2.1 mmol As(III)/g adsorbent). These figures show that am-TiO2 is the most efficient sorbent for As(III) adsorption of the four materials tested, Mg-Al LDH is second best, while HT and am-Al(OH)3 are not suitable for As(III) removal.

    The adsorption of As(III) to Mg-Al LDH as a function of dissolved As(III) could be adequately described by a linear equation, suggesting that As(III) adsorption to Mg-Al LDH was governed by anion exchange. As a result, the co-existing anions (HCO3- and PO43-) showed a significant influence on As(III) adsorption to Mg-Al LDH.
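    A linear isotherm of the kind described above can be fitted with a least-squares slope through the origin; a minimal sketch, where the example data and units are hypothetical rather than measurements from the study:

```python
def linear_isotherm_kd(c_dissolved, q_adsorbed):
    """Fit q = Kd * c through the origin by least squares -- the linear
    form reported for As(III) adsorption to Mg-Al LDH. The slope Kd is
    the distribution coefficient."""
    num = sum(c * q for c, q in zip(c_dissolved, q_adsorbed))
    den = sum(c * c for c in c_dissolved)
    return num / den

# Hypothetical data: c in mmol/L dissolved As(III), q in mmol/g sorbent
kd = linear_isotherm_kd([0.5, 1.0, 2.0], [0.026, 0.049, 0.102])
```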

    Considering the interfering effects of co-existing anions on am-TiO2, HCO3− did not influence As(III) adsorption, while PO43− caused a slight but clear competition effect. Overall, am-TiO2 would be the best choice of these four materials in contact with As-contaminated groundwater due to its superior As(III) removal properties and the limited competition from co-existing anions.

    Full text (pdf)
    fulltext
  • Novotny, Ondrej
    KTH, Skolan för arkitektur och samhällsbyggnad (ABE), Byggvetenskap, Bro- och stålbyggnad.
    Stabilisation of Steel Structures by Diaphragm Action of Trapezoidal Sheeting (2020). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The main idea behind stabilisation by diaphragm action is to create a deep beam that can resist horizontal loads acting on a structure. It is achieved by connecting the sheeting to the primary structure of the roof so that the edge beams resist the bending moment through normal stresses and the sheeting resists the shear force through shear stresses. An essential assumption of a functional diaphragm system is a sufficient connection between the sheeting and the primary structure.

    In this thesis, the global behaviour of a diaphragm, as well as possible failure modes of sheeting of different thicknesses, are investigated.

    A laboratory experiment is conducted in the first part of the thesis. In the experiment, two types of self-drilling screws in combination with three different plate thicknesses are tested in shear. Slip flexibility of the screws is evaluated based on the experiment and compared to flexibilities according to the European Recommendations for the Application of Metal Sheeting Acting as a Diaphragm.

    In the second part of the thesis, a finite element simulation is performed on two structures to investigate the global behaviour of the diaphragm. The experimental results are, additionally, implemented into the finite element model.

    In the last part, hand calculations are performed based on analytical formulas given in the European Recommendations for the Application of Metal Sheeting Acting as a Diaphragm, and the maximum horizontal displacement of the structure is compared to the FEA results.

    Full text (pdf)
    fulltext
  • Samuelsson, Elin
    KTH, Skolan för elektroteknik och datavetenskap (EECS).
    A Confidence Measure for Deep Convolutional Neural Network Regressors (2020). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Deep convolutional neural networks can be trained to estimate gaze directions from eye images. However, such networks do not provide any information about the reliability of their predictions. As uncertainty estimates could enable more accurate and reliable gaze tracking applications, a method for confidence calculation was examined in this project.

    This method had to be computationally efficient for the gaze tracker to function in real-time, without reducing the quality of the gaze predictions. Thus, several state-of-the-art methods were abandoned in favor of Mean-Variance Estimation, which uses an additional neural network for estimating uncertainties. This confidence network is trained based on the accuracy of the gaze rays generated by the primary network, i.e. the prediction network, for different eye images. Two datasets were used for evaluating the confidence network, including the effect of different design choices.

    A main conclusion was that the uncertainty associated with a predicted gaze direction depends on more factors than just the visual appearance of the eye image. Thus, a confidence network taking only this image as input can never model the regression problem perfectly.

    Despite this, the results show that the network learns useful information. In fact, its confidence estimates outperform those from an established Monte Carlo method, where the uncertainty is estimated from the spread of gaze directions from several prediction networks in an ensemble.
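    The Mean-Variance Estimation idea above, a confidence network trained on the primary network's accuracy, is commonly formulated as a heteroscedastic Gaussian negative log-likelihood. A minimal numeric sketch; the error values are invented for illustration, not results from the thesis:

```python
import numpy as np

def mve_nll(error, log_var):
    """Per-sample negative log-likelihood (up to a constant) when a
    confidence network predicts a log-variance per input: the loss is
    low only if large variance is predicted where errors are large."""
    return 0.5 * (log_var + error**2 / np.exp(log_var))

# Hypothetical angular errors (degrees) of a gaze prediction network
errors = np.array([0.2, 0.3, 3.0])

# Uniformly confident vs. uncertain only on the genuinely hard sample
overconfident = mve_nll(errors, np.full(3, np.log(0.1)))
calibrated = mve_nll(errors, np.log(np.array([0.1, 0.1, 9.0])))
```

    The calibrated variances yield a lower total loss, which is the training signal that teaches the confidence network where the primary network is unreliable.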

    Full text (pdf)
    fulltext
  • Bai, Dingxu
    KTH, Skolan för elektroteknik och datavetenskap (EECS).
    Persuasive Technology for Appealing to the Motivation of Chinese for Garbage Classification (2020). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The amount of garbage is rapidly increasing in China, which has led to severe environmental problems. Garbage recycling is imperative in dealing with the problem, and garbage classification plays an essential role in the recycling process. However, most Chinese residents have a weak awareness of garbage classification, so how to motivate them to classify garbage correctly has been a concern. Persuasive technology is used to intentionally change users' behaviors and/or attitudes through designed persuasion. This project is aimed at exploring whether persuasive technology can be employed to appeal to the motivation of Chinese residents for garbage classification. The persuasive strategy in this thesis is based on literature research, and a prototype is developed to implement it. User testing and interviews show that persuasive technology can indeed appeal to users' motivation for garbage classification.

    Full text (pdf)
    fulltext
  • Radesjö, Fanny
    KTH, Skolan för elektroteknik och datavetenskap (EECS).
    Collision Avoidance for a Fence Inspecting Drone Operating at an Airport (2020). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Many important facilities are surrounded by security fences that need to be regularly inspected for damage. To automate this task, it has been proposed to use a drone equipped with a camera. The images taken by the camera would be analyzed using deep learning, so that no human labor would be required except when damage needs to be repaired.

    While flying autonomously along the fence it is important that the drone does not collide with unexpected obstacles. The aim of this project is to propose a suitable algorithm to avoid collisions during the inspection mission.

    To gather information about the environment and detect potential obstacles a stereo camera is used. Since the purpose of the drone is to capture images of the fence one of the criteria for evaluating the method is that the avoidance maneuvers should not cause the drone to miss more of the fence than necessary. The method chosen is based on the concept of collision cones. The idea is to approximate a bounding box around the obstacle and create a cone with the outline of the bounding box as the base and the drone position as the top point. The drone is restricted from flying in a direction inside the cone and is thereby forced to find a path around the obstacle.

    The algorithm is implemented and tested in simulation. From the simulation results, it is concluded that the algorithm is able to prevent collisions. Also, conclusions about how the parameter values should be chosen for the real drone are made.

    Full text (pdf)
    fulltext
  • Björk, Kim
    KTH, Skolan för elektroteknik och datavetenskap (EECS).
    A comparison of compiler strategies for serverless functions written in Kotlin (2020). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Hosting options for software have become more flexible over time, from requiring on-premises hardware to tailoring a hosting solution in a public cloud. One of the latest options is the serverless architecture, which entails running software only when it is invoked.

    Public cloud providers such as Amazon, Google and IBM provide serverless solutions, yet none of them provide an official support for the popular language Kotlin. This may be one of the reasons why the performance of Kotlin in a serverless environment is, to our knowledge, relatively undocumented. This thesis investigates the performance of serverless functions written in Kotlin when run with different compiler strategies, with the purpose of contributing knowledge within this subject. One Just-In-Time compiler, the Hotspot Java Virtual Machine (JVM), is set against an Ahead-Of-Time compiler, GraalVM.

    A benchmark suite was constructed and two serverless functions were created for each benchmark: one run on the JVM and one run as a native image created by GraalVM. The benchmark tests are divided into two categories. The first consists of cold starts, which occur the first time a serverless function is invoked, or when it has not been invoked for a longer period of time, requiring certain start-up actions. The other category is warm starts: runs where the function has recently been invoked and the cold-start actions are not needed.

    The results showed faster total runtimes and lower memory requirements for the GraalVM-enabled functions during cold starts. During warm starts, the GraalVM-enabled functions still required less memory, but the JVM functions improved substantially over time, bringing their total runtimes closer to those of their GraalVM-enabled counterparts.
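    The cold/warm distinction can be made concrete with a small timing harness; the harness and the simulated function below are hypothetical stand-ins, not the thesis's benchmark suite:

```python
import time

def benchmark(invoke, warm_runs=5):
    """Time one cold start (the first invocation) and the mean of
    several warm starts (subsequent invocations), in milliseconds."""
    start = time.perf_counter()
    invoke()
    cold_ms = (time.perf_counter() - start) * 1000
    warm = []
    for _ in range(warm_runs):
        start = time.perf_counter()
        invoke()
        warm.append((time.perf_counter() - start) * 1000)
    return cold_ms, sum(warm) / len(warm)

# Stand-in for a deployed function: the first call pays a start-up cost
calls = {"n": 0}
def fake_serverless_function():
    calls["n"] += 1
    time.sleep(0.05 if calls["n"] == 1 else 0.005)

cold_ms, warm_ms = benchmark(fake_serverless_function)
```

    For JIT-compiled JVM functions the warm-start average itself drifts downward across runs as the compiler optimizes hot paths, which is the improvement over time the results describe.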

    Full text (pdf)
    fulltext
  • Syropoulos, Nikolaos
    KTH, Skolan för elektroteknik och datavetenskap (EECS).
    DigiJag: A participatory design of an e-learning and social platform accessible to users with moderate intellectual disabilities (2020). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Background: Digital education provides lifelong learning opportunities and the acquisition of new skills, and the importance of developing flexible e-learning platforms that take the end-users' needs and experiences into account is high. People with intellectual disabilities are the group most likely to encounter challenges related to school, yet they are underrepresented in studies of web accessibility and digital education. The DigiJag project in Sweden aims to develop an e-learning and social platform accessible to users with moderate intellectual disabilities.

    Purpose: The purpose of this exploratory research study was to define the user’s needs and identify the key features used in the design of an e-learning and social platform accessible to users with moderate ID.

    Methods: Participatory methods were used to provide all stakeholders an influence on the final design. Qualitative data was collected from two focus groups with professional experts. Qualitative data was also collected from a series of processes together with students with ID including three voting sessions, individual observation barrier walkthroughs and two cognitive walkthroughs for heuristic evaluation of DigiJag platform’s Hi-Fi interactive prototype. Qualitative content theme analysis was used, and an iterative prototype design process was applied.

    Results: The study revealed themes related to direct and indirect stakeholders, values that reflect user needs, key features to be supported by the platform and suggestions for data collection methods from students with intellectual disabilities using participatory design processes. Additionally, aesthetic elements, social themes for the platform, information regarding the user experience of students with intellectual disabilities with existing interactive digital tools as well as results from heuristic evaluation of an interactive Hi-Fi prototype were also part of the study’s findings.

    Discussion: By involving people with intellectual disabilities in different stages of this study, we managed to give them a voice in the design process and to distribute power from designers and experts to users with intellectual disabilities.

    Full text (pdf)
    fulltext
  • Löw, Simon
    KTH, Skolan för elektroteknik och datavetenskap (EECS).
    Automatic Generation of Patient-specific Gamma Knife Treatment Plans for Vestibular Schwannoma Patients (2020). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this thesis a new fully automatic process for radiotherapy treatment planning with the Leksell Gamma Knife is implemented and evaluated: First, a machine learning algorithm is trained to predict the desired dose distribution, then a convex optimization problem is solved to find the optimal Gamma Knife configuration using the prediction as the optimization objective.

    The method is evaluated using Bayesian linear regression, Gaussian processes and convolutional neural networks for the prediction. The quality of the generated treatment plans is compared to that of the clinical treatment plans, and the relationship between the prediction and the optimization result is analyzed.

    The convolutional neural network model shows the best performance and predicts realistic treatment plans, which only change minimally under the optimization and are on the same quality level as the clinical plans. The Bayesian linear regression model generates plans on the same quality level, but is not able to predict realistic treatment plans, which leads to substantial changes to the plan under the optimization. The Gaussian process shows the worst performance and is not able to predict plans of the same quality as the clinical plans.
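    The two-step pipeline above, predicting a dose distribution and then solving a convex problem to realize it, can be sketched as non-negative least squares over candidate shot weights. This is a deliberately simplified stand-in for the thesis's actual optimization problem; the kernels and target below are toy data:

```python
import numpy as np

def fit_shot_weights(dose_kernels, target_dose, steps=500, lr=0.01):
    """Projected gradient descent on ||A w - d||^2 with w >= 0, where
    column j of A is the voxel dose delivered by candidate shot j at
    unit weight and d is the predicted (target) dose distribution."""
    A = np.asarray(dose_kernels, dtype=float).T  # shape (voxels, shots)
    d = np.asarray(target_dose, dtype=float)
    w = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ w - d)
        w = np.maximum(0.0, w - lr * grad)  # project onto w >= 0
    return w

# Two toy shots irradiating disjoint voxels; target is exactly reachable
weights = fit_shot_weights([[1.0, 0.0], [0.0, 1.0]], [2.0, 3.0])
```

    When the prediction is realistic, the optimal weights reproduce it almost exactly, matching the observation that good CNN predictions change only minimally under the optimization.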

    Full text (pdf)
    fulltext
  • Hollstrand, Paulina
    KTH, Skolan för elektroteknik och datavetenskap (EECS).
    Supporting Pre-Production in Game Development: Process Mapping and Principles for a Procedural Prototyping Tool (2020). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Game development involves both traditional software activities combined with creative work. As a result, game design practices are characterized by an extensive process of iterative development and evaluation, where prototyping is a major component to test and evaluate the player experience. Content creation for the virtual world the players inhabit is one of the most time-consuming aspects of production.

    This experimental research study focuses on analyzing and formulating challenges and desired properties in a prototyping tool based on Procedural Content Generation to assist game designers in their early ideation process. To investigate this, a proof of concept was iteratively developed based on information gathered from interviews and evaluations with world designers during a conceptual and design study. The final user study assessed the tool’s functionalities and indicated its potential utility in enhancing the designers’ content exploration and risk management during pre-production activities. Key guidelines for the tool’s architecture can be distilled into: (1) A modular design approach supports balance between content controllability and creativity. (2) Design levels and feature representation should combine and range between Micro (specific) to Macro (high-level, abstract). The result revealed challenges in combining exploration of the design space with optimization and refinement of content.

    However, the thesis concentrated on one specific type of content, city generation, to represent world-design content generation. To fully understand the generalizable aspects, different types of game content would need to be covered in further research.

    Full text (pdf)
    fulltext
  • Adzemovic, Haris
    KTH, Skolan för elektroteknik och datavetenskap (EECS).
    A template-based approach to automatic program repair of Sonarqube static warnings (2020). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    As the prevalence of software continues to increase, so does the number of bugs. Static analysis can uncover a multitude of bugs in a reasonable time frame compared to its dynamic equivalent but is plagued by other issues such as high false-positive alert rates and unclear calls to action, making it underutilized considering the benefits it can bring with its bug-finding abilities. This thesis aims to reduce the shortcomings of static analysis by implementing and evaluating a template-based approach of automatically repairing bugs found by static analysis. The approach is evaluated by automatically creating and submitting patches containing bug fixes to open-source projects already utilizing static analysis. The results show that the approach and developed tool are valuable and decrease the number of bugs of the kind which static analysis finds. Two possible ways of integrating the created tool into existing developer workflows are prototyped and a comparison with a similar tool is performed to showcase the different approaches’ differences, strengths and weaknesses.
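    A template-based repair rule can be as small as a pattern plus a rewrite. The toy rule below, targeting a common static-analysis finding (reference equality against a Java string literal), is a hypothetical illustration, not one of the tool's actual templates:

```python
import re

# Template: `ident == "literal"`  ->  `"literal".equals(ident)`
# (flipping the receiver also makes the comparison null-safe)
STRING_EQ = re.compile(r'(\w+)\s*==\s*("(?:[^"\\]|\\.)*")')

def fix_string_equality(java_line):
    """Apply the rewrite template to one line of Java source."""
    return STRING_EQ.sub(r'\2.equals(\1)', java_line)

fixed = fix_string_equality('if (name == "foo") {')
```

    A real template engine would work on an AST rather than raw text, but the principle is the same: each static-warning category maps to a mechanical, reviewable transformation that can be packaged into a patch.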

    Full text (pdf)
    fulltext
  • Odelstad, Elias
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Rosetta spacecraft potential and activity evolution of comet 67P (2016). Licentiate thesis, with papers (Other academic).
    Abstract [en]

    The plasma environment of an active comet provides a unique setting for plasma physics research. The complex interaction of newly created cometary ions with the flowing plasma of the solar wind gives rise to a plethora of plasma physics phenomena, that can be studied over a large range of activity levels as the distance to the sun, and hence the influx of solar energy, varies. In this thesis, we have used measurements of the spacecraft potential by the Rosetta Langmuir probe instrument (LAP) to study the evolution of activity of comet 67P/Churyumov-Gerasimenko as it approached the sun from 3.6 AU in August 2014 to 2.1 AU in March 2015. The measurements are validated by cross-calibration to a fully independent measurement by an electrostatic analyzer, the Ion Composition Analyzer (ICA), also on board Rosetta.

    The spacecraft was found to be predominantly negatively charged during the time covered by our investigation, driven so by a rather high electron temperature of ~5 eV resulting from the low collision rate between electrons and the tenuous neutral gas. The spacecraft potential exhibited a clear covariation with the neutral density as measured by the ROSINA Comet Pressure Sensor (COPS) on board Rosetta. As the spacecraft potential depends on plasma density and electron temperature, this shows that the neutral gas and the plasma are closely coupled. The neutral density and negative spacecraft potential were higher in the northern hemisphere, which experienced summer conditions during the investigated period due to the nucleus spin axis being tilted toward the sun. In this hemisphere, we found a clear variation of spacecraft potential with comet longitude, exactly as seen for the neutral gas, with coincident peaks in neutral density and spacecraft potential magnitude roughly every 6 h, when sunlit parts of the neck region of the bi-lobed nucleus were in view of the spacecraft. The plasma density was estimated to have increased during the investigated time period by a factor of 8-12 in the northern hemisphere and possibly as much as a factor of 20-44 in the southern hemisphere, due to the combined effects of seasonal changes and decreasing heliocentric distance.

    The spacecraft potential measurements obtained by LAP generally exhibited good correlation with the estimates from ICA, confirming the accuracy of both of these instruments for measurements of the spacecraft potential. 

    Full text (pdf)
    FULLTEXT01
  • Schwack, Fabian
    Untersuchungen zum Betriebsverhalten oszillierender Wälzlager am Beispiel von Rotorblattlagern in Windenergieanlagen (2020). Doctoral thesis, monograph (Other academic).
    Abstract [en]

    Pitch bearings in wind turbines are typical examples of oscillating rolling-element bearings. On the one hand, pitch bearings are affected by intentional oscillations due to the adaption of the aerodynamic angle of attack of the rotor blade. On the other hand, unwanted micro-oscillations (vibrations) are caused by turbulence when the wind turbine is at standstill. The occurring damage mechanisms can reduce the service life of the component and, in the worst-case scenario, lead to a failure of the whole technical system.

    On the basis of the operating conditions of a reference wind turbine and load simulations for the reference pitch bearing, the occurring wear phenomena are analysed. For this purpose, experimental investigations for the identification and determination of influencing parameters are undertaken. The experimental investigations are carried out on angular contact ball bearings of the size 7208. The operating conditions of the reference pitch bearing are transferred to the test bearing size using a scaling method. A time-dependent connection of wear phenomena is established by the incubation hypotheses. The influence of the oscillation amplitude and frequency on occurring wear becomes clear through the investigations. In addition, experiments are carried out on four-point contact ball bearings with a pitch diameter of 675 mm.

    The results of the experimental investigations are used to set up a simulation model which is focused on the contact kinematics between rolling elements and raceways. The occurrence and the form of the wear can be explained with the simulated frictional work. Using the fully parametric structure of the model, the influence of the bearing geometry and input parameters on the frictional work can be analysed.

    In order to determine the influence of the type of lubricating grease, a further series of experiments is carried out. In these experiments, commonly used lubricants for pitch bearings are tested. The results show that none of the examined lubricants can prevent wear for all operating conditions. However, statements on suitable grease compositions can be made for certain operating parameters.

    The results show the relationship between wear and operating parameters in pitch bearings based on experimental investigations and simulations.

    Full text (pdf)
    fulltext
  • Odelstad, Elias
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Plasma environment of an intermediately active comet: Evolution and dynamics observed by ESA's Rosetta spacecraft at 67P/Churyumov-Gerasimenko (2018). Doctoral thesis, with papers (Other academic).
    Abstract [en]

    The subject of this thesis is the evolution and dynamics of the plasma environment of a moderately active comet before, during and after its closest approach to the Sun. For over 2 years in 2014-2016, the European Space Agency’s Rosetta spacecraft followed the comet 67P/Churyumov-Gerasimenko at distances typically between a few tens and a few hundred kilometers from the nucleus, the longest and closest inspection of a comet ever made. Its payload included a suite of five plasma instruments (the Rosetta Plasma Consortium, RPC), providing unprecedented in-situ measurements of the plasma environment in the inner coma of a comet.

    In the first two studies, we use spacecraft potential measurements by the Langmuir probe instrument (LAP) to study the evolving cometary plasma environment. The spacecraft potential was mostly negative, often below -10 V and sometimes below -20 V, revealing the presence of warm (around 5-10 eV) coma photoelectrons, not effectively cooled by collisions with the relatively tenuous coma gas. The magnitude of the negative spacecraft potential depends on the electron density and traced heliocentric, cometocentric, seasonal and diurnal variations in cometary outgassing, consistent with production at or inside the cometocentric distance of the spacecraft as the dominant source of the observed plasma.

    In the third study, we investigate ion velocities and electron temperatures in the diamagnetic cavity of the comet, combining LAP and Mutual Impedance Probe (MIP) measurements. Ion velocities were generally in the range 2-4 km/s, well above the expected neutral velocity of at most 1 km/s. Thus, the ions were (at least partially) decoupled from the neutrals already inside the diamagnetic cavity, indicating that ion-neutral drag was not responsible for balancing the outside magnetic pressure. The spacecraft potential was around -5 V throughout the cavity, showing that warm electrons were consistently present inside the cavity, at least as far in as Rosetta reached. Also, cold (below about 0.1 eV) electrons were consistently observed throughout the cavity, but less consistently in the surrounding region, suggesting that while Rosetta never entered a region of efficient collisional cooling of electrons, such a region was possibly not far away during the cavity crossings. Also, it reinforces the idea of previous authors that the intermittent nature of the cold electron component was due to filamentation of this cold plasma at or near the cavity boundary, possibly related to an instability of this boundary.

    Finally, we report the detection of large-amplitude, quasi-harmonic density fluctuations with associated magnetic field oscillations in association with asymmetric plasma and magnetic field enhancements previously found in the region surrounding the diamagnetic cavity, occurring predominantly on their descending slopes. Typical frequencies are around 0.1 Hz, i.e. about ten times the water and half the proton gyro-frequency, and the associated magnetic field oscillations, when detected, have wave vectors perpendicular to the background magnetic field. We suggest that they are Ion Bernstein waves, possibly excited by the drift-cyclotron instability resulting from the strong plasma inhomogeneities in this region.
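    The quoted frequency ordering follows from the cyclotron relation f_c = qB/(2*pi*m): a singly charged water ion (18 amu) gyrates at 1/18 of the proton gyrofrequency, so a wave at ten times the water gyrofrequency sits at 10/18 ≈ 0.56, roughly half, of the proton gyrofrequency, independent of field strength. A minimal numerical check (the 13 nT field value is an illustrative assumption, not a number from the abstract):

```python
import math

Q = 1.602176634e-19   # elementary charge (C)
M_P = 1.67262192e-27  # proton mass (kg)

def gyrofrequency_hz(charge, mass, b_tesla):
    """Cyclotron frequency f_c = q*B / (2*pi*m)."""
    return charge * b_tesla / (2 * math.pi * mass)

B = 13e-9  # hypothetical background field of 13 nT, chosen for illustration
f_proton = gyrofrequency_hz(Q, M_P, B)
f_water = gyrofrequency_hz(Q, 18 * M_P, B)  # singly charged H2O+ (18 amu)

print(f"proton gyrofrequency: {f_proton:.3f} Hz")
print(f"water-ion gyrofrequency: {f_water:.4f} Hz")
# Ten times the water gyrofrequency is 10/18 of the proton gyrofrequency,
# i.e. roughly half, independent of the field strength B.
print(f"10*f_water / f_proton = {10 * f_water / f_proton:.3f}")
```

At this assumed field strength the proton gyrofrequency comes out near 0.2 Hz, which is consistent with the ~0.1 Hz waves described above.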

    Full text (pdf)
    FULLTEXT01
  • Ljung, Carolina
    et al.
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Svedberg, Maria
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Estimation of Loss Given Default Distributions for Non-Performing Loans Using Zero-and-One Inflated Beta Regression Type Models (2020). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis investigates three different techniques for estimating the loss given default of non-performing consumer loans, as a contribution to a credit risk evaluation model compliant with the regulations stipulated by the Basel Accords, which regulate the capital requirements of European financial institutions. First, multiple linear regression is applied; thereafter, zero-and-one inflated beta regression is implemented in two versions, with and without Bayesian inference. The model performances confirm that modeling loss given default data is challenging; however, the results show that zero-and-one inflated beta regression is superior to the other models in predicting LGD. It should be recognized that all models had difficulty distinguishing low-risk loans, while the prediction accuracy for riskier loans, resulting in larger losses, was higher. For future research, it is recommended to include macroeconomic variables in the models to capture economic downturn conditions, as well as to adopt decision trees, for example through machine learning.
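    The zero-and-one inflated beta idea can be sketched with an intercept-only toy model on simulated data (this is an illustration of the distributional structure, not the thesis's fitted regression): the sample is modelled as point masses at LGD = 0 and LGD = 1 plus a beta-distributed bulk on (0, 1), and all parameters are estimated jointly by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import beta

rng = np.random.default_rng(0)

# Simulated LGD sample: 30% exact zeros, 10% exact ones, Beta(2, 5) bulk.
n = 2000
u = rng.uniform(size=n)
y = np.where(u < 0.3, 0.0, np.where(u < 0.4, 1.0, rng.beta(2.0, 5.0, size=n)))

def neg_loglik(theta):
    """Negative log-likelihood of an intercept-only ZOIB model.

    theta = (logit p0, logit p1|y>0, logit mu, log phi), where the
    (0, 1) bulk is Beta(mu * phi, (1 - mu) * phi).
    """
    p0 = expit(theta[0])                  # P(y = 0)
    p1 = (1 - p0) * expit(theta[1])       # P(y = 1)
    mu, phi = expit(theta[2]), np.exp(theta[3])
    mid = (y > 0) & (y < 1)
    ll = ((y == 0).sum() * np.log(p0)
          + (y == 1).sum() * np.log(p1)
          + np.sum(np.log1p(-p0 - p1)
                   + beta.logpdf(y[mid], mu * phi, (1 - mu) * phi)))
    return -ll

res = minimize(neg_loglik, x0=np.zeros(4), method="Nelder-Mead")
p0_hat = expit(res.x[0])
p1_hat = (1 - p0_hat) * expit(res.x[1])
print(f"P(LGD=0) ≈ {p0_hat:.2f}, P(LGD=1) ≈ {p1_hat:.2f}")
```

In a regression version, each of p0, p1 and mu would be linked to loan covariates instead of being constants.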

    Full text (pdf)
    fulltext
  • Olanders, David
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Optimal Time-Varying Cash Allocation (2020). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    A payment is the most fundamental aspect of a trade that involves funds. In recent years, the development of new payment services has accelerated significantly as the world has moved further into the digital era. This transition has led to an increased demand for digital payment solutions that can handle trades across the world. As trades today can be agreed at any time, wherever the payer and payee are located, the party that mediates payments must be available at any time in order to mediate an agreed exchange. This requires the payment service provider to always have funds available in the required countries and currencies so that trades are always possible. This thesis concerns how a payment service provider can reallocate capital in a cost-efficient way in order for trades to always be available.

    Traditionally, the reallocation of capital is done in a rule-based manner, which discards the cost dimension and thereby focuses only on the reallocation itself. This thesis concerns methods to reallocate capital optimally, focusing on the cost of transferring capital within the network; such methods have the potential to transfer capital in a far more cost-efficient way.

    When the reallocation decisions are formulated mathematically as an optimization problem, the cost function becomes a linear program with both Boolean and real-valued constraints. This makes it infeasible to locate the optimal solution using standard methods for linear programs, which is why both extended traditional methods and more advanced methods were used. The model was evaluated based on a large number of simulations, in comparison with the performance of a rule-based reallocation system.
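    The combination of Boolean transfer decisions and continuous amounts makes this a mixed-integer linear program. A minimal sketch of that structure, with toy numbers and `scipy.optimize.milp` (SciPy >= 1.9) as the solver rather than the thesis's own method: moving an amount x from account A to B incurs a proportional fee plus a fixed fee that is only paid if the Boolean y is switched on.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Decision variables: x = amount moved from A to B, y in {0,1} = "transfer made".
# Objective: 1% proportional fee on x plus a fixed fee of 5 when y = 1.
c = np.array([0.01, 5.0])

constraints = [
    # Linking constraint x - 100*y <= 0: money can only move if y = 1
    # (100 is the balance available in account A).
    LinearConstraint([[1.0, -100.0]], ub=[0.0]),
    # Liquidity requirement: account B must end up with at least 30.
    LinearConstraint([[1.0, 0.0]], lb=[30.0]),
]

res = milp(c=c,
           constraints=constraints,
           integrality=np.array([0, 1]),        # x continuous, y integer
           bounds=Bounds(lb=[0.0, 0.0], ub=[100.0, 1.0]))

# Optimum: transfer exactly 30 and pay 5 + 0.3 = 5.3 in fees.
print(res.x, res.fun)
```

A real network version would have one (x, y) pair per directed account pair and balance-conservation constraints per account.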

    The developed model provides a significant cost reduction compared to the rule-based approach and thereby outperforms the traditional reallocation system. Future work should focus on expanding the model by broadening the available transfer options, by increasing the considered uncertainty via a Bayesian treatment and, finally, by considering all cost aspects of the network.

    Full text (pdf)
    fulltext
  • Brodd, Tobias
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Modeling the Relation Between Implied and Realized Volatility (2020). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Options are an important part of today's financial market. It is therefore highly important to be able to understand when options are overvalued and undervalued in order to get a lead on the market. To determine this, the relation between the volatility of the underlying asset, called realized volatility, and the market's expected volatility, called implied volatility, can be analyzed. In this thesis, five models were investigated for modeling the relation between implied and realized volatility: one Ornstein–Uhlenbeck model, two autoregressive models and two artificial neural networks. To analyze the performance of the models, different accuracy measures were calculated for out-of-sample forecasts. Signals from the models were also calculated and used in a simulated options trading environment to get a better understanding of how well they perform in trading applications. The results suggest that artificial neural networks are able to model the relation more accurately than more traditional time series models. It was also shown that a trading strategy based on forecasting the relation was able to generate significant profits, and that profits could be increased by combining a forecasting model with a signal classification model.
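    The Ornstein–Uhlenbeck approach can be sketched as follows (illustrative parameters, not fitted to market data): simulate a mean-reverting spread between implied and realized volatility with the exact discretization, then recover the parameters from the fact that a sampled OU process is an AR(1), fitted here by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an Ornstein-Uhlenbeck process for the implied-minus-realized
# volatility spread; theta = mean-reversion speed, mu = long-run level.
theta, mu, sigma, dt, n = 5.0, 0.02, 0.1, 1.0 / 252, 5000
a = np.exp(-theta * dt)                             # exact one-step AR(1) decay
sd = sigma * np.sqrt((1.0 - a**2) / (2.0 * theta))  # exact step noise std
x = np.empty(n)
x[0] = mu
for t in range(n - 1):
    x[t + 1] = a * x[t] + mu * (1.0 - a) + sd * rng.standard_normal()

# In discrete time the OU process is an AR(1), so theta and mu can be
# recovered from a least-squares fit of x[t+1] on x[t].
b, c = np.polyfit(x[:-1], x[1:], 1)
theta_hat = -np.log(b) / dt
mu_hat = c / (1.0 - b)
print(f"theta ≈ {theta_hat:.2f}, mu ≈ {mu_hat:.4f}")
```

A trading signal could then be derived from how far the observed spread sits from the estimated long-run level mu_hat.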

    Full text (pdf)
    fulltext
  • Ungsgård, Oscar
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Stochastic Modelling of Cash Flows in Private Equity (2020). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    A private equity investment is an investment in a financial asset that is not publicly traded. Such assets are therefore very difficult to value and also give rise to great difficulty when it comes to quantifying risk. In a typical private equity investment, the investor commits a prespecified amount of capital to a fund; this capital is called upon as needed by the fund, and capital is eventually returned to the investor as the fund starts to turn a profit.

    In this way a private equity investment can be boiled down to consist of two cash flows, the contributions to the fund and distributions from the fund to the investor. These cash flows are usually made within a prespecified time frame but at unspecified intervals and amounts. As an investor in a fund, carrying too little liquid assets when contributions are called upon will cause trouble, but carrying significantly more than needed is also not desirable as it represents a loss in potential revenue from having less capital in more profitable investments.

    The goal of this thesis was to find a way to reliably model these cash flows and to represent the results in a way that is meaningful to the investor, by constructing value-at-risk-like measures for the liquid capital necessary to carry at a given time in case contributions are called upon. It was found that the distributions could be modelled very well with the chosen stochastic processes, both in predicting the average path of the cash flows and in modelling their variability. In contrast, it was found that the contributions could not be modelled very well. The reason was an observed lag in the speed of contributions at the start of the fund's lifetime; this lag was not taken into account when constructing the stochastic model, and hence it produced simulated cash flows not in line with those used in the calibration.

    Full text (pdf)
    fulltext
  • Ahlqvist, Sigge
    et al.
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Arriaza-Hult, Matteus
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    How to measure the degree of PIT-ness in a credit rating system for a low default portfolio? (2020). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In order to be compliant with the Basel regulations, banks need to compute two probabilities of default (PDs): point-in-time (PIT) and through-the-cycle (TTC). The aim is to explain fluctuations in the rating system, which are expected to be affected by systematic and idiosyncratic factors. Being able to determine, in an objective manner, whether the rating system takes the business cycle - i.e. the systematic factors - into account when assigning a credit rating to an obligor is useful for evaluating PD models. It is also necessary for banks that want to use their own risk parameters and models instead of standardized models, which is desirable for most banks as it can lower capital requirements.

    This thesis proposes a new measure for the degree of PIT-ness, aimed to be especially useful when examining a low default portfolio. The proposed measure is built on a Markovian approach to the credit rating system and takes into account credit rating migrations, the seasonal component of the business cycle and time series analysis. An analysis was performed on two different credit portfolios in order to interpret the results.
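    The Markovian building block of such a measure can be illustrated with the cohort estimator of a rating transition matrix: count the observed one-step migrations and normalize each row into a probability distribution. The rating histories below are toy data; the PIT-ness measure itself is the thesis's contribution and is not reproduced here.

```python
import numpy as np

# Toy yearly rating histories (1 = best grade, 3 = default), standing in
# for the observed rating paths of obligors in a credit portfolio.
paths = [
    [1, 1, 2, 2, 1],
    [2, 2, 3, 3, 3],
    [1, 2, 2, 2, 2],
    [2, 1, 1, 2, 3],
]

n_states = 3
counts = np.zeros((n_states, n_states))
for path in paths:
    for a, b in zip(path[:-1], path[1:]):
        counts[a - 1, b - 1] += 1  # count each observed one-step migration

# Maximum-likelihood (cohort) estimate: normalize each row to sum to 1.
P = counts / counts.sum(axis=1, keepdims=True)
print(np.round(P, 2))
```

A portfolio with many large off-diagonal entries in P migrates more, which is the kind of behaviour a PIT-ness measure reacts to.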

    The results demonstrated that the degree of PIT-ness was lower in a low default portfolio than in a sampled portfolio that displayed a greater number of rating migrations of larger magnitude. Considering relevant macroeconomic variables to represent the business cycle was highlighted as one of the most important factors for obtaining reliable results with the proposed measure.

    Full text (pdf)
    fulltext
  • Thorstensson, Linnea
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Clustering Methods as a Recruitment Tool for Smaller Companies (2020). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    With the help of new technology, it has become much easier to apply for a job. Reaching out to a larger audience, however, also results in many more applications to consider when hiring for a new position. As a consequence, many large companies use statistical learning methods as a tool in the first step of the recruitment process. Smaller companies, which do not have access to the same amount of historical data, do not have the same opportunities to digitalize their recruitment process. Using topological data analysis, this thesis explores how clustering methods can be used on smaller data sets in the early stages of the recruitment process. It also studies how the level of abstraction in the data representation affects the results. The methods seem to perform well on higher-level job announcements but struggle on basic-level positions. The study also shows that the representation of candidates and jobs has a large impact on the results.

    Full text (pdf)
    fulltext
  • Evholt, David
    et al.
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Larsson, Oscar
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Generative Adversarial Networks and Natural Language Processing for Macroeconomic Forecasting (2020). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Macroeconomic forecasting is a classic problem, today most often modeled using time series analysis. Few attempts have been made using machine learning methods, and even fewer incorporating unconventional data, such as that from social media. In this thesis, a Generative Adversarial Network (GAN) is used to predict U.S. unemployment, beating the ARIMA benchmark on all horizons. Furthermore, attempts at using Twitter data and the Natural Language Processing (NLP) model DistilBERT are performed. While these attempts do not beat the benchmark, they do show promising results with predictive power.

    The models are also tested at predicting the U.S. stock index S&P 500. For these models, the Twitter data does improve the accuracy and shows the potential of social media data when predicting a more erratic index with less seasonality that is more responsive to current trends in public discourse. The results also show that Twitter data can be used to predict trends in both unemployment and the S&P 500 index. This sets the stage for further research into NLP-GAN models for macroeconomic predictions using social media data.

    Full text (pdf)
    fulltext
  • Masoudi, Meysam
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Datavetenskap, Kommunikationssystem, CoS, Radio Systems Laboratory (RS Lab).
    Sofia Lisi, Shari
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Datavetenskap, Kommunikationssystem, CoS.
    Cavdar, Cicek
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Datavetenskap, Kommunikationssystem, CoS, Radio Systems Laboratory (RS Lab).
    Cost-effective Migration towards Virtualized C-RAN with Scalable Fronthaul Design (2020). In: IEEE Systems Journal, ISSN 1932-8184, E-ISSN 1937-9234. Article in journal (Refereed).
    Abstract [en]

    Migration from distributed to centralized radio access networks (C-RAN) can be expensive in terms of capital expenditures, due to the initial investment, while it lowers operational expenditures thanks to the pooling of baseband processing into the cloud and reduced power consumption. Partial centralization can also be an option, by employing network function splitting and keeping lower physical-layer functions co-located with the radio units; this increases the power consumption but relaxes the high capacity requirement on the fronthaul. It is not obvious which migration strategy is more cost-effective. In this paper, we formulate a pool placement optimization problem as an integer linear program (ILP) that minimizes the total cost of ownership (TCO), and evaluate the cost of migration to C-RAN with both full centralization of network functions and partial centralization using function splitting. We also define a network upgrade optimization problem, adding new cells to the network, as a revisited version of the original optimization problem, to evaluate the upgradability of the architectures. We solve the problem both with ILP, for optimality, and with a genetic algorithm, for scalability. Simulation results show that partial centralization results in the optimal TCO, with a lower crossover time compared to C-RAN with full centralization.

    Full text (pdf)
    fulltext
  • Koski, Alexander
    KTH, Skolan för elektroteknik och datavetenskap (EECS).
    Randomly perturbing the bytecode of white box cryptography implementations in an attempt to mitigate side-channel attacks (2020). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This study takes one step further towards constructing a tool able to automatically strengthen the security of cryptographic implementations. In white box cryptography, the encryption key is hidden inside the encryption algorithm, out of plain sight. An attacker can try to extract the secret key by conducting a side channel attack, differential computational analysis, to which many white boxes are vulnerable. The technique to increase security explored in this study consists of randomly perturbing the white box, with different probabilities, by adding the value one to a variable inside the running white box. This breaks the correctness of the output on all three tested white box implementations to various extents, but some perturbations can be made that maintain fairly high correctness of the program's output. Running a white box with perturbations does not cause any significant increase in execution time. Out of more than 100 000 possible perturbation points, 25 were chosen to be investigated further. In one case the security of a perturbed white box increased, but in four similar cases the white box was made more insecure; otherwise no change in security was observed. A more sophisticated technique for identifying the best points to insert perturbations is therefore required in order to further investigate how to increase the security of cryptographic implementations while still maintaining fairly high correctness despite the random perturbations.
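    The perturbation mechanism can be sketched with a toy stand-in (this is not a real white-box cipher, only an illustration of probabilistically faulting one internal variable and measuring how often the output stays correct):

```python
import random

def toy_cipher(block: int, perturb_p: float = 0.0) -> int:
    """Stand-in for a white-box round function (NOT real cryptography).

    With probability perturb_p, adds one to an intermediate variable,
    mimicking the study's single-point perturbation of the bytecode.
    """
    state = (block * 31 + 7) & 0xFF
    if random.random() < perturb_p:
        state += 1                      # the injected fault
    return (state ^ 0xA5) & 0xFF

def correctness(perturb_p: float, trials: int = 10_000) -> float:
    """Fraction of outputs that still match the unperturbed cipher."""
    ok = sum(toy_cipher(b % 256, perturb_p) == toy_cipher(b % 256)
             for b in range(trials))
    return ok / trials

random.seed(42)
for p in (0.0, 0.1, 0.5):
    print(f"perturbation prob {p}: correctness ≈ {correctness(p):.3f}")
```

For this toy function the measured correctness is close to 1 - p; the study's point is that some real perturbation points degrade correctness much less than others.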

    Full text (pdf)
    fulltext
  • Hollstrand, Paulina
    KTH, Skolan för elektroteknik och datavetenskap (EECS).
    Supporting Pre-Production in Game Development: Process Mapping and Principles for a Procedural Prototyping Tool (2020). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Game development involves both traditional software activities combined with creative work. As a result, game design practices are characterized by an extensive process of iterative development and evaluation, where prototyping is a major component to test and evaluate the player experience. Content creation for the virtual world the players inhabit is one of the most time-consuming aspects of production.

    This experimental research study focuses on analyzing and formulating challenges and desired properties in a prototyping tool based on Procedural Content Generation to assist game designers in their early ideation process. To investigate this, a proof of concept was iteratively developed based on information gathered from interviews and evaluations with world designers during a conceptual and design study. The final user study assessed the tool’s functionalities and indicated its potential utility in enhancing the designers’ content exploration and risk management during pre-production activities. Key guidelines for the tool’s architecture can be distilled into: (1) A modular design approach supports a balance between content controllability and creativity. (2) Design levels and feature representation should combine elements ranging from Micro (specific) to Macro (high-level, abstract). The results revealed challenges in combining exploration of the design space with optimization and refinement of content.

    However, the thesis concentrated on one specific type of content, city generation, as a representative of world-design content generation. To fully understand the generalizable aspects, different types of game content would need to be covered in further research.

    Full text (pdf)
    fulltext
  • Löw, Simon
    KTH, Skolan för elektroteknik och datavetenskap (EECS).
    Automatic Generation of Patient-specific Gamma Knife Treatment Plans for Vestibular Schwannoma Patients (2020). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this thesis a new fully automatic process for radiotherapy treatment planning with the Leksell Gamma Knife is implemented and evaluated: First, a machine learning algorithm is trained to predict the desired dose distribution, then a convex optimization problem is solved to find the optimal Gamma Knife configuration using the prediction as the optimization objective.

    The method is evaluated using Bayesian linear regression, Gaussian processes and convolutional neural networks for the prediction. The quality of the generated treatment plans is compared to that of the clinical treatment plans, and the relationship between the prediction and the optimization result is analyzed.

    The convolutional neural network model shows the best performance and predicts realistic treatment plans, which change only minimally under the optimization and are on the same quality level as the clinical plans. The Bayesian linear regression model generates plans on the same quality level, but is not able to predict realistic treatment plans, which leads to substantial changes to the plan under the optimization. The Gaussian process shows the worst performance and is not able to predict plans of the same quality as the clinical plans.
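    The second stage, using the prediction as the optimization objective, can be sketched as a convex non-negative least-squares fit of shot weights to a predicted dose. Everything below is a toy 1-D illustration (Gaussian "kernels", a plateau standing in for the ML prediction); the actual Gamma Knife optimization is more elaborate.

```python
import numpy as np
from scipy.optimize import nnls

# Toy 1-D "dose kernels": each column is the dose profile of one candidate
# shot, modelled here as a Gaussian blob at a fixed position.
grid = np.linspace(0.0, 1.0, 200)
centers = np.linspace(0.1, 0.9, 12)
A = np.exp(-((grid[:, None] - centers[None, :]) ** 2) / (2 * 0.05**2))

# A "predicted" dose distribution standing in for the ML model's output:
# a plateau covering the target region.
d_pred = np.where((grid > 0.3) & (grid < 0.7), 1.0, 0.0)

# Convex subproblem: choose non-negative shot weights w minimizing
# || A @ w - d_pred ||_2  (non-negative least squares).
w, residual = nnls(A, d_pred)
print(f"active shots: {(w > 1e-6).sum()} of {len(w)}, residual: {residual:.2f}")
```

The non-negativity constraint is what makes the weights physically meaningful: a shot can only add dose, never remove it.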

    Full text (pdf)
    fulltext
  • Adzemovic, Haris
    KTH, Skolan för elektroteknik och datavetenskap (EECS).
    A template-based approach to automatic program repair of Sonarqube static warnings (2020). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    As the prevalence of software continues to increase, so does the number of bugs. Static analysis can uncover a multitude of bugs in a reasonable time frame compared to its dynamic equivalent, but is plagued by other issues, such as high false-positive alert rates and unclear calls to action, leaving it underutilized considering the benefits its bug-finding abilities can bring. This thesis aims to reduce the shortcomings of static analysis by implementing and evaluating a template-based approach to automatically repairing bugs found by static analysis. The approach is evaluated by automatically creating and submitting patches containing bug fixes to open-source projects already utilizing static analysis. The results show that the approach and the developed tool are valuable and decrease the number of bugs of the kind that static analysis finds. Two possible ways of integrating the created tool into existing developer workflows are prototyped, and a comparison with a similar tool is performed to showcase the approaches’ differences, strengths and weaknesses.
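    The template idea can be sketched as a pattern-plus-rewrite pair applied to source text. The rule shown (a Java string compared with `==` instead of `equals()`) is a typical static-analysis warning, but the template format and helper below are hypothetical illustrations, not the thesis tool's actual implementation.

```python
import re

# A repair template: a warning pattern plus a rewrite using backreferences.
TEMPLATES = [
    # `x == "lit"` in Java compares references; rewrite to .equals().
    (re.compile(r'(\w+)\s*==\s*("(?:[^"\\]|\\.)*")'), r"\1.equals(\2)"),
]

def repair(source: str) -> str:
    """Apply every template everywhere its pattern matches."""
    for pattern, replacement in TEMPLATES:
        source = pattern.sub(replacement, source)
    return source

buggy = 'if (status == "OK") { handle(); }'
print(repair(buggy))  # if (status.equals("OK")) { handle(); }
```

Real repair tools operate on an AST rather than raw text, which avoids false matches inside strings and comments; the regex form is only meant to show the template structure.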

    Full text (pdf)
    fulltext
  • Syropoulos, Nikolaos
    KTH, Skolan för elektroteknik och datavetenskap (EECS).
    DigiJag: A participatory design of an e-learning and social platform accessible to users with moderate intellectual disabilities (2020). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Background: Digital education provides lifelong learning opportunities and the acquisition of new skills, and the importance of developing flexible e-learning platforms that take the end-users’ needs and experiences into account is high. People with intellectual disabilities are the group most likely to encounter challenges related to school, while being underrepresented in studies of web accessibility and digital education. The DigiJag project in Sweden aims to develop an e-learning and social platform accessible to users with moderate intellectual disabilities.

    Purpose: The purpose of this exploratory research study was to define the user’s needs and identify the key features used in the design of an e-learning and social platform accessible to users with moderate ID.

    Methods: Participatory methods were used to give all stakeholders an influence on the final design. Qualitative data was collected from two focus groups with professional experts, as well as from a series of processes together with students with ID, including three voting sessions, individual observation barrier walkthroughs and two cognitive walkthroughs for heuristic evaluation of the DigiJag platform’s Hi-Fi interactive prototype. Qualitative content theme analysis was used, and an iterative prototype design process was applied.

    Results: The study revealed themes related to direct and indirect stakeholders, values that reflect user needs, key features to be supported by the platform and suggestions for data collection methods from students with intellectual disabilities using participatory design processes. Additionally, aesthetic elements, social themes for the platform, information regarding the user experience of students with intellectual disabilities with existing interactive digital tools as well as results from heuristic evaluation of an interactive Hi-Fi prototype were also part of the study’s findings.

    Discussion: By involving people with intellectual disabilities in different stages of this study, we managed to give them a voice in the design process and to redistribute power from designers and experts to users with intellectual disabilities.

    Full text (pdf)
    fulltext
  • Broiles, Thomas W.
    et al.
    Southwest Res Inst, Div Space Sci & Engn, 6220 Culebra Rd, San Antonio, TX 78238 USA..
    Burch, J. L.
    Southwest Res Inst, Div Space Sci & Engn, 6220 Culebra Rd, San Antonio, TX 78238 USA..
    Chae, K.
    Southwest Res Inst, Div Space Sci & Engn, 6220 Culebra Rd, San Antonio, TX 78238 USA..
    Clark, G.
    Johns Hopkins Univ, Appl Phys Lab, 11100 Johns Hopkins Rd, Laurel, MD 20723 USA..
    Cravens, T. E.
    Univ Kansas, Dept Phys & Astron, 1450 Jayhawk Blvd, Lawrence, KS 66045 USA..
    Eriksson, Anders
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Fuselier, S. A.
    Southwest Res Inst, Div Space Sci & Engn, 6220 Culebra Rd, San Antonio, TX 78238 USA.;Univ Texas San Antonio, Dept Phys & Astron, San Antonio, TX 78249 USA..
    Frahm, R. A.
    Southwest Res Inst, Div Space Sci & Engn, 6220 Culebra Rd, San Antonio, TX 78238 USA..
    Gasc, S.
    Univ Bern, Phys Inst, Sidlerstr 5, CH-3012 Bern, Switzerland..
    Goldstein, R.
    Southwest Res Inst, Div Space Sci & Engn, 6220 Culebra Rd, San Antonio, TX 78238 USA..
    Henri, P.
    CNRS, LPC2E, F-45071 Orleans, France..
    Koenders, C.
    Tech Univ Carolo Wilhelmina Braunschweig, Inst Geophys & Extraterr Phys, Mendelssohnstr 3, D-38106 Braunschweig, Germany..
    Livadiotis, G.
    Southwest Res Inst, Div Space Sci & Engn, 6220 Culebra Rd, San Antonio, TX 78238 USA..
    Mandt, K. E.
    Southwest Res Inst, Div Space Sci & Engn, 6220 Culebra Rd, San Antonio, TX 78238 USA.;Univ Texas San Antonio, Dept Phys & Astron, San Antonio, TX 78249 USA..
    Mokashi, P.
    Southwest Res Inst, Div Space Sci & Engn, 6220 Culebra Rd, San Antonio, TX 78238 USA..
    Nemeth, Z.
    Wigner Res Ctr Phys, H-1121 Budapest, Hungary..
    Odelstad, Elias
    Uppsala universitet, Rymd- och plasmafysik.
    Rubin, M.
    Univ Bern, Phys Inst, Sidlerstr 5, CH-3012 Bern, Switzerland..
    Samara, M.
    Goddard Space Flight Ctr, Heliophys Div, 8800 Greenbelt Rd, Greenbelt, MD 20771 USA..
    Statistical analysis of suprathermal electron drivers at 67P/Churyumov-Gerasimenko (2016). In: Monthly Notices of the Royal Astronomical Society, ISSN 0035-8711, E-ISSN 1365-2966, Vol. 462, pp. S312-S322. Article in journal (Refereed).
    Abstract [en]

    We use observations from the Ion and Electron Sensor (IES) on board the Rosetta spacecraft to study the relationship between the cometary suprathermal electrons and the drivers that affect their density and temperature. We fit the IES electron observations with the summation of two kappa distributions, which we characterize as a dense and warm population (~10 cm⁻³ and ~16 eV) and a rarefied and hot population (~0.01 cm⁻³ and ~43 eV). The parameters of our fitting technique determine the populations' density, temperature, and invariant kappa index. We focus our analysis on the warm population to determine its origin by comparing the density and temperature with the neutral density and magnetic field strength. We find that the warm electron population is actually two separate sub-populations: electron distributions with temperatures above 8.6 eV and electron distributions with temperatures below 8.6 eV. The two sub-populations have different relationships between their density and temperature. Moreover, the two sub-populations are affected by different drivers. The hotter sub-population temperature is strongly correlated with neutral density, while the cooler sub-population is unaffected by neutral density and is only weakly correlated with magnetic field strength. We suggest that the population with temperatures above 8.6 eV is being heated by lower hybrid waves driven by counterstreaming solar wind protons and newly formed, cometary ions created in localized, dense neutral streams. To the best of our knowledge, this represents the first observations of cometary electrons heated through wave-particle interactions.

    Full text (pdf)
    FULLTEXT01
  • Edberg, Niklas J. T.
    et al.
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Alho, M.
    Aalto Univ, Sch Elect Engn, Dept Radio Sci & Engn, POB 13000, FI-00076 Aalto, Finland..
    André, Mats
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Andrews, David J.
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Behar, E.
    Swedish Inst Space Phys, Box 812, SE-98128 Kiruna, Sweden..
    Burch, J. L.
    Southwest Res Inst, 6220 Culebra Rd, San Antonio, TX 78238 USA..
    Carr, C. M.
    Imperial Coll London, Exhibit Rd, London SW7 2AZ, England..
    Cupido, E.
    Imperial Coll London, Exhibit Rd, London SW7 2AZ, England..
    Engelhardt, Ilka. A. D.
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Eriksson, Anders I.
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Glassmeier, K. -H
    Goetz, C.
    TU Braunschweig, Inst Geophys & Extraterr Phys, Mendelssohnstr 3, D-38106 Braunschweig, Germany..
    Goldstein, R.
    Southwest Res Inst, 6220 Culebra Rd, San Antonio, TX 78238 USA..
    Henri, P.
    Lab Phys & Chim Environm & Espace, F-45071 Orleans 2, France..
    Johansson, Fredrik L.
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Koenders, C.
    TU Braunschweig, Inst Geophys & Extraterr Phys, Mendelssohnstr 3, D-38106 Braunschweig, Germany..
    Mandt, K.
    Southwest Res Inst, 6220 Culebra Rd, San Antonio, TX 78238 USA.;Univ Texas San Antonio, Dept Phys & Astron, San Antonio, TX 78249 USA..
    Moestl, C.
    Austrian Acad Sci, Space Res Inst, Schmiedlstr 6, A-8042 Graz, Austria..
    Nilsson, H.
    Swedish Inst Space Phys, Box 812, SE-98128 Kiruna, Sweden..
    Odelstad, Elias
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Richter, I.
    TU Braunschweig, Inst Geophys & Extraterr Phys, Mendelssohnstr 3, D-38106 Braunschweig, Germany..
    Wedlund, C. Simon
    Univ Oslo, Dept Phys, Box 1048 Blindern, N-0316 Oslo, Norway..
    Wieser, G. Stenberg
    Swedish Inst Space Phys, Box 812, SE-98128 Kiruna, Sweden..
    Szego, K.
    Wigner Res Ctr Phys, Konkoly Thege Miklos Ut 29-33, H-1121 Budapest, Hungary..
    Vigren, Erik
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Volwerk, M.
    Austrian Acad Sci, Space Res Inst, Schmiedlstr 6, A-8042 Graz, Austria..
    CME impact on comet 67P/Churyumov-Gerasimenko (2016). In: Monthly Notices of the Royal Astronomical Society, ISSN 0035-8711, E-ISSN 1365-2966, Vol. 462, pp. S45-S56. Article in journal (Refereed).
    Abstract [en]

    We present Rosetta observations from comet 67P/Churyumov-Gerasimenko during the impact of a coronal mass ejection (CME). The CME impacted on 2015 Oct 5-6, when Rosetta was about 800 km from the comet nucleus, and 1.4 au from the Sun. Upon impact, the plasma environment is compressed to the level that solar wind ions, not seen a few days earlier when at 1500 km, now reach Rosetta. In response to the compression, the flux of suprathermal electrons increases by a factor of 5-10 and the background magnetic field strength increases by a factor of ~2.5. The plasma density increases by a factor of 10 and reaches 600 cm⁻³, due to increased particle impact ionization, charge exchange and the adiabatic compression of the plasma environment. We also observe unprecedentedly large magnetic field spikes at 800 km, reaching above 200 nT, which are interpreted as magnetic flux ropes. We suggest that these could possibly be formed by magnetic reconnection processes in the coma as the magnetic field across the CME changes polarity, or as a consequence of strong shears causing Kelvin-Helmholtz instabilities in the plasma flow. Due to the limited orbit of Rosetta, we are not able to observe if a tail disconnection occurs during the CME impact, which could be expected based on previous remote observations of other CME-comet interactions.

    Full text (pdf)
    FULLTEXT01
  • Galand, M.
    et al.
    Imperial Coll London, Dept Phys, Prince Consort Rd, London SW7 2AZ, England.
    Heritier, K. L.
    Imperial Coll London, Dept Phys, Prince Consort Rd, London SW7 2AZ, England.
    Odelstad, Elias
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Henri, P.
    Univ Orleans, CNRS, LPC2E, 3A,Ave Rech Sci, F-45071 Orleans 2, France.
    Broiles, T. W.
    Southwest Res Inst, PO Drawer 28510, San Antonio, TX 78228 USA.
    Allen, A. J.
    Imperial Coll London, Dept Phys, Prince Consort Rd, London SW7 2AZ, England.
    Altwegg, K.
    Univ Bern, Phys Inst, Sidlerstr 5, CH-3012 Bern, Switzerland.
    Beth, A.
    Imperial Coll London, Dept Phys, Prince Consort Rd, London SW7 2AZ, England.
    Burch, J. L.
    Southwest Res Inst, PO Drawer 28510, San Antonio, TX 78228 USA.
    Carr, C. M.
    Imperial Coll London, Dept Phys, Prince Consort Rd, London SW7 2AZ, England.
    Cupido, E.
    Imperial Coll London, Dept Phys, Prince Consort Rd, London SW7 2AZ, England.
    Eriksson, Anders
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Glassmeier, K.-H.
    Johansson, Fredrik L.
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Lebreton, J.-P.
    Mandt, K. E.
    Southwest Res Inst, PO Drawer 28510, San Antonio, TX 78228 USA..
    Nilsson, H.
    Swedish Inst Space Phys, POB 812, SE-98128 Kiruna, Sweden.
    Richter, I.
    TU Braunschweig, Inst Geophys & Extraterr Phys, Mendelssohnstr 3, D-38106 Braunschweig, Germany.
    Rubin, M.
    Univ Bern, Phys Inst, Sidlerstr 5, CH-3012 Bern, Switzerland.
    Sagnieres, L. B. M.
    Imperial Coll London, Dept Phys, Prince Consort Rd, London SW7 2AZ, England.
    Schwartz, S. J.
    Imperial Coll London, Dept Phys, Prince Consort Rd, London SW7 2AZ, England.
    Semon, T.
    Univ Bern, Phys Inst, Sidlerstr 5, CH-3012 Bern, Switzerland.
    Tzou, C.-Y.
    Vallieres, X.
    Univ Orleans, CNRS, LPC2E, 3A,Ave Rech Sci, F-45071 Orleans 2, France.
    Vigren, Erik
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Wurz, P.
    Univ Bern, Phys Inst, Sidlerstr 5, CH-3012 Bern, Switzerland.
    Ionospheric plasma of comet 67P probed by Rosetta at 3 au from the Sun (2016). In: Monthly Notices of the Royal Astronomical Society, ISSN 0035-8711, E-ISSN 1365-2966, Vol. 462, pp. S331-S351. Article in journal (Refereed)
    Abstract [en]

    We propose to identify the main sources of ionization of the plasma in the coma of comet 67P/Churyumov-Gerasimenko at different locations in the coma and to quantify their relative importance, for the first time, for close cometocentric distances (< 20 km) and large heliocentric distances (> 3 au). The proposed ionospheric model is used as an organizing element of a multi-instrument data set from the Rosetta Plasma Consortium (RPC) plasma and particle sensors, from the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis and from the Microwave Instrument on the Rosetta Orbiter, all on board the ESA/Rosetta spacecraft. The calculated ionospheric density driven by Rosetta observations is compared to the RPC-Langmuir Probe and RPC-Mutual Impedance Probe electron density. The main cometary plasma sources identified are photoionization by solar extreme ultraviolet (EUV) radiation and energetic electron-impact ionization. Over the northern, summer hemisphere, the solar EUV radiation is found to drive the electron density, with occasional periods when energetic electrons are also significant. Over the southern, winter hemisphere, photoionization alone cannot explain the observed electron density, which sometimes reaches higher values than over the summer hemisphere; electron-impact ionization has to be taken into account. The bulk of the electron population is warm, with a temperature of the order of 7-10 eV. For increased neutral densities, we show evidence of partial energy degradation of the hot electron energy tail and cooling of the full electron population.

    Full text (pdf)
    FULLTEXT01
  • Odelstad, Elias
    et al.
    Uppsala universitet, Institutionen för fysik och astronomi.
    Eriksson, Anders I.
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Edberg, Niklas J. T.
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Johansson, Fredrik
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Vigren, Erik
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    André, Mats
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Tzou, C.-Y.
    Univ Bern, Phys Inst, Bern, Switzerland.
    Carr, C.
    Univ London Imperial Coll Sci Technol & Med, Space & Atmospher Phys Grp, London, England.
    Cupido, E.
    Univ London Imperial Coll Sci Technol & Med, Space & Atmospher Phys Grp, London, England.
    Evolution of the plasma environment of comet 67P from spacecraft potential measurements by the Rosetta Langmuir probe instrument (2015). In: Geophysical Research Letters, ISSN 0094-8276, E-ISSN 1944-8007, Vol. 42, no. 23. Article in journal (Refereed)
    Abstract [en]

    We study the evolution of the plasma environment of comet 67P using measurements of the spacecraft potential from early September 2014 (heliocentric distance 3.5 AU) to late March 2015 (2.1 AU) obtained by the Langmuir probe instrument. The low collision rate keeps the electron temperature high (~5 eV), resulting in a negative spacecraft potential whose magnitude depends on the electron density. This potential is more negative in the northern (summer) hemisphere, particularly over sunlit parts of the neck region on the nucleus, consistent with neutral gas measurements by the Cometary Pressure Sensor of the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis. Assuming constant electron temperature, the spacecraft potential traces the electron density. The density increases as the comet approaches the Sun, most clearly in the southern hemisphere, by a factor possibly as high as 20-44 between September 2014 and January 2015. The northern hemisphere plasma density increase stays around or below a factor of 8-12, consistent with seasonal insolation change.
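The inversion described in this abstract (constant electron temperature, spacecraft potential tracing electron density) can be illustrated with a toy calculation. The logarithmic current-balance form below, Vs ≈ const − (kTe/e)·ln(ne), and the example numbers are assumptions for illustration only, not the RPC-LAP team's calibration:

```python
import math

def density_ratio_from_potential(delta_v, te_ev):
    """Ratio n2/n1 of electron densities implied by a change delta_v (volts)
    in a negative spacecraft potential, assuming a fixed electron temperature
    te_ev (eV) and a current balance of the form Vs ~ const - (kTe/e)*ln(ne).
    Illustrative assumption only, not the instrument calibration."""
    return math.exp(-delta_v / te_ev)

# Example: if the potential becomes 15 V more negative at Te ~ 5 eV, the
# implied density ratio is exp(3) ~ 20, of the order of the
# southern-hemisphere increase quoted in the abstract.
ratio = density_ratio_from_potential(-15.0, 5.0)
```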

    Full text (pdf)
    FULLTEXT01
  • Vigren, Erik
    et al.
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Galand, M.
    Univ London Imperial Coll Sci Technol & Med, Dept Phys, London SW7 2AZ, England.
    Eriksson, Anders I.
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Edberg, Niklas J. T.
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Odelstad, Elias
    Uppsala universitet, Institutionen för fysik och astronomi.
    Schwartz, S. J.
    Univ London Imperial Coll Sci Technol & Med, Dept Phys, London SW7 2AZ, England.
    On The Electron-To-Neutral Number Density Ratio In The Coma Of Comet 67P/Churyumov-Gerasimenko: Guiding Expression And Sources For Deviations (2015). In: Astrophysical Journal, ISSN 0004-637X, E-ISSN 1538-4357, Vol. 812, no. 1, article id 54. Article in journal (Refereed)
    Abstract [en]

    We compute partial photoionization frequencies of H2O, CO2, and CO, the major molecules in the coma of comet 67P/Churyumov-Gerasimenko, the target comet of the ongoing ESA Rosetta mission. Values are computed from Thermosphere Ionosphere Mesosphere Energetics and Dynamics/Solar EUV Experiment solar EUV spectra for 2014 August 1, 2015 March 1, and for perihelion (2015 August, as based on prediction). From the varying total photoionization frequency of H2O, as computed from 2014 August 1 to 2015 May 20, we derive a simple analytical expression for the electron-to-neutral number density ratio as a function of cometocentric and heliocentric distance. The underlying model assumes radial movement of the coma constituents and does not account for chemical loss or the presence of electric fields. We discuss various effects and processes that can cause deviations between values from the analytical expression and actual electron-to-neutral number density ratios. The analytical expression is thus not strictly meant to predict the actual electron-to-neutral number density ratio, but is useful in comparisons with observations as an indicator of processes at play in the cometary coma.
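The kind of guiding expression the abstract describes follows from radial outflow: ionization at frequency ν (scaling as rh⁻²) accumulates plasma over the travel time r/u, giving ne/nn ≈ ν·r/u. The sketch below uses this standard scaling with illustrative parameter values; it is not the paper's exact fitted expression:

```python
def electron_to_neutral_ratio(nu_1au, r_m, rh_au, u_ms):
    """Radial-outflow estimate of n_e/n_n at cometocentric distance r_m (m)
    and heliocentric distance rh_au (au): ne/nn ~ nu * r / u, with the
    photoionization frequency nu scaling as rh^-2.  A sketch of the standard
    scaling under the stated assumptions, not the paper's expression."""
    nu = nu_1au / rh_au ** 2   # photoionization frequency at rh (s^-1)
    return nu * r_m / u_ms     # ionization built up over the travel time r/u

# Example (illustrative values): nu(1 au) ~ 8e-7 s^-1 for H2O, 10 km from
# the nucleus at 3 au, with a 1 km/s radial outflow speed.
ratio = electron_to_neutral_ratio(8e-7, 10e3, 3.0, 1e3)
```

The ratio grows linearly with cometocentric distance and falls off as the inverse square of heliocentric distance, which is the qualitative behaviour the abstract's expression encodes.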

    Full text (pdf)
    FULLTEXT01
  • Yang, Lei
    et al.
    Univ Oslo, Dept Phys, Sem Soelands Vei 24,Postbox 1048, N-0317 Oslo, Norway.
    Paulsson, J. J. P.
    Univ Oslo, Dept Phys, Sem Soelands Vei 24,Postbox 1048, N-0317 Oslo, Norway.
    Wedlund, C. Simon
    Univ Oslo, Dept Phys, Sem Soelands Vei 24,Postbox 1048, N-0317 Oslo, Norway.
    Odelstad, Elias
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Edberg, Niklas J. T.
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Koenders, C.
    Tech Univ Carolo Wilhelmina Braunschweig, Inst Geophys & Extraterr Phys, Mendelssohnstr 3, D-38106 Braunschweig, Germany.
    Eriksson, Anders I.
    Uppsala universitet, Institutet för rymdfysik, Uppsalaavdelningen.
    Miloch, W. J.
    Univ Oslo, Dept Phys, Sem Soelands Vei 24,Postbox 1048, N-0317 Oslo, Norway.
    Observations of high-plasma density region in the inner coma of 67P/Churyumov-Gerasimenko during early activity (2016). In: Monthly Notices of the Royal Astronomical Society, ISSN 0035-8711, E-ISSN 1365-2966, Vol. 462, pp. S33-S44. Article in journal (Refereed)
    Abstract [en]

    In 2014 September, as Rosetta transitioned to close bound orbits at 30 km from comet 67P/Churyumov-Gerasimenko, the Rosetta Plasma Consortium Langmuir probe (RPC-LAP) data showed large systematic fluctuations in both the spacecraft potential and the collected currents. We analyse the potential bias sweeps from RPC-LAP, from which we extract three sets of parameters: (1) the knee potential, which we relate to the spacecraft potential, (2) the ion attraction current, which is composed of the photoelectron emission current from the probe as well as contributions from local ions, secondary emission, and low-energy electrons, and (3) an electron current whose variation is, in turn, an estimate of the electron density variation. We study the evolution of these parameters between 4 and 3.2 au in heliocentric and cometocentric frames. We find on September 9 a transition into a high-density plasma region characterized by increased knee potential fluctuations and plasma currents to the probe. In conjunction with previous studies, the early cometary plasma can be seen as composed of two regions: an outer region characterized by solar wind plasma and small quantities of pick-up ions, and an inner region with enhanced plasma densities. This conclusion is in agreement with other RPC instruments such as RPC-MAG, RPC-IES and RPC-ICA, and with numerical simulations.

    Full text (pdf)
    FULLTEXT01
  • Jablecka, Marta
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Modelling CLV in the Insurance Industry Using Deep Learning Methods (2020). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This paper presents a master's thesis project in which deep learning methods are used to both calculate and subsequently attempt to maximize Customer Lifetime Value (CLV) for an insurance provider's customers. Specifically, the report investigates whether panel data comprised of customers' monthly insurance policy subscription history can be used with Recurrent Neural Networks (RNN) to achieve better predictive performance than the naïve forecasting model. To do this, the use of Long Short Term Memory (LSTM) for anomaly detection in a supervised manner is explored to determine which customers are more likely to change their subscription policies. Whether Deep Reinforcement Learning (DRL) can be used in this setting to maximize CLV is also investigated.

    The study found that the best RNN models outperformed the naïve model in terms of precision on the data set containing customers who are more likely to change their subscription policies. The models suffer, however, from several notable limitations, so further research is advised. Selecting those customers was shown to be successful in terms of precision but not sensitivity, which suggests that there is room for improvement. The DRL models did not show a substantial improvement in terms of CLV maximization.
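For reference, the two metrics weighed against each other in the abstract, precision and sensitivity (recall), can be computed as follows; the tiny example labels are made up for illustration:

```python
def precision_recall(labels, preds):
    """Precision and recall (sensitivity) for binary predictions:
    precision = TP / (TP + FP), recall = TP / (TP + FN)."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    return tp / (tp + fp), tp / (tp + fn)

# A model can score perfectly on precision while missing most positives
# (low recall) - the pattern the study reports.
prec, rec = precision_recall([1, 1, 1, 0], [1, 0, 0, 0])
```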

    Full text (pdf)
    fulltext
  • Hendey Bröte, Erik
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Duration-Weighted Carbon Footprint Metrics and Carbon Risk Factor for Credit Portfolios (2020). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Current standard carbon footprint metrics attribute responsibility for a firm's greenhouse gas (GHG) emitting activities equally between an entity's equity and debt. This study introduces a set of novel duration-weighted metrics which take into consideration the length of financing provided. These measures show promise for reporting footprints of debt portfolios, but further study of methodological robustness should be performed before they can be adopted widely. The measures are also attractive from a risk perspective, as they are linearly dependent on duration and therefore sensitive to yields. A factor portfolio is constructed using the new carbon intensity measure, and corporate yields are studied in a linear factor model. The other factors included derive from Nelson-Siegel parameterizations of US Treasury rates and the USD swap spread curve. Following the Fama-MacBeth procedure, the carbon factor is found not to persist over the 10-year period.
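The Fama-MacBeth procedure mentioned above runs a cross-sectional regression of returns on factor exposures in each period, then tests whether the time series of estimated premia has a nonzero mean. A minimal single-factor sketch on toy data (the panel values are invented for illustration):

```python
def cs_slope(exposures, returns):
    """First pass: one-regressor OLS slope (with intercept) of returns on
    exposures within a single cross-section."""
    n = len(exposures)
    mx, my = sum(exposures) / n, sum(returns) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(exposures, returns))
    var = sum((x - mx) ** 2 for x in exposures)
    return cov / var

def fama_macbeth(panel):
    """Second pass: average the period-by-period risk premia and form a
    t-statistic from their time-series variation.
    panel: list of (exposures, returns) tuples, one per period."""
    lams = [cs_slope(e, r) for e, r in panel]
    t_periods = len(lams)
    mean = sum(lams) / t_periods
    var = sum((l - mean) ** 2 for l in lams) / (t_periods - 1)
    return mean, mean / (var / t_periods) ** 0.5

# Toy panel: the estimated premium alternates between 2 and 3 over
# four periods, so the average premium is 2.5.
panel = [([1, 2, 3], [2, 4, 6]), ([1, 2, 3], [3, 6, 9])] * 2
premium, t_stat = fama_macbeth(panel)
```

A small t-statistic on the averaged premium is what "the carbon factor is found not to persist" corresponds to in this framework.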

    Full text (pdf)
    fulltext
  • Janovic, Filip
    et al.
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Singh, Paul
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Modelling default probabilities: The classical vs. machine learning approach (2020). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Fintech companies that offer Buy Now, Pay Later products are heavily dependent on accurate default probability models, since they bear the risk of customers not fulfilling their obligations. Several machine learning algorithms can be applied to minimize the losses incurred when customers default, but in an era in which machine learning is gaining popularity, there is a vast number of algorithms to select from. This thesis aims to address this issue by applying three fundamentally different machine learning algorithms in order to find the best algorithm according to a selection of chosen metrics, such as ROC AUC and precision-recall AUC. The algorithms compared are Logistic Regression, Random Forest and CatBoost. All of these algorithms were benchmarked against Klarna's current XGBoost model. The results indicated that the CatBoost model is the optimal one according to the main metric of comparison, the ROC AUC score. The CatBoost model outperformed the Logistic Regression model by seven percentage points, the Random Forest model by three percentage points and the XGBoost model by one percentage point.
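The main comparison metric, ROC AUC, can be computed directly from model scores via the rank (Mann-Whitney) formulation; a small self-contained sketch with made-up scores:

```python
def roc_auc(labels, scores):
    """ROC AUC as the probability that a randomly chosen positive is scored
    above a randomly chosen negative, counting ties as 1/2 (the Mann-Whitney
    U formulation, equivalent to the area under the ROC curve)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Two defaults (label 1) and two non-defaults with toy model scores:
auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 0.75
```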

    Full text (pdf)
    fulltext
  • Lundin, Filip
    et al.
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Wahlgren, Markus
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Capturing Tail Risk in a Risk Budgeting Model (2020). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Risk budgeting, in contrast to conventional portfolio management strategies, is all about distributing the risk between holdings in a portfolio. The risk in risk budgeting is traditionally measured in terms of volatility and a Gaussian distribution is commonly utilized for modeling return data. In this thesis, these conventions are challenged by introducing different risk measures, focusing on tail risk, and other probability distributions for modeling returns.

    Two models for forming risk budgeting portfolios that acknowledge tail risk were chosen. Both models were based on CVaR as a risk measure, in line with what previous researchers have used. The first modeled returns with their empirical distribution and the second with a Gaussian mixture model. The performance of these models was thereafter evaluated. Here, a diverse set of asset classes, several risk budgets, and risk targets were used to form portfolios. Based on the performance, measured in risk-adjusted returns, it was clear that the models that took tail risk into account in general had superior performance relative to the standard model. Nevertheless, it should be noted that the superiority was significantly higher for portfolios that consisted mainly of high-risk assets than for portfolios with more low-risk assets, and that the superior performance did not hold in all time periods considered. It was also clear that the model that used the empirical distribution to model returns performed better than the model based on the Gaussian mixture assumption when the portfolio consisted of more assets with heavier tails.
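CVaR (expected shortfall), the tail-risk measure both models build on, has a simple estimator under the empirical distribution used by the first model: average the worst (1 − α) fraction of observed losses. A minimal sketch:

```python
def empirical_cvar(losses, alpha=0.95):
    """Empirical CVaR at level alpha: the mean of the worst (1 - alpha)
    fraction of the observed losses (losses given as positive numbers).
    A minimal discrete estimator, not the thesis's full optimization."""
    worst_first = sorted(losses, reverse=True)
    k = max(1, round(len(worst_first) * (1 - alpha)))
    return sum(worst_first[:k]) / k

# With 100 equally likely losses 1..100, CVaR at 95% is the mean of the
# five largest: (100 + 99 + 98 + 97 + 96) / 5 = 98.
cvar_95 = empirical_cvar(list(range(1, 101)), 0.95)
```

Unlike volatility, this estimator responds only to the tail of the distribution, which is why CVaR-based risk budgets differ most from volatility-based ones for heavy-tailed assets.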

    Full text (pdf)
    fulltext
  • Lindberg, Jonas
    et al.
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Wolfert Källman, Isak
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Vehicle Collision Risk Prediction Using a Dynamic Bayesian Network (2020). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This thesis tackles the problem of predicting the collision risk for vehicles driving in complex traffic scenes for a few seconds into the future. The method is based on previous research using dynamic Bayesian networks to represent the state of the system.

    Common risk prediction methods are often categorized into three different groups depending on their abstraction level. The most complex of these are interaction-aware models which take driver interactions into account. These models often suffer from high computational complexity which is a key limitation in practical use. The model studied in this work takes interactions between drivers into account by considering driver intentions and the traffic rules in the scene.

    The state of the traffic scene used in the model contains the physical state of vehicles, the intentions of drivers and the expected behaviour of drivers according to the traffic rules. To allow for real-time risk assessment, an approximate inference of the state given the noisy sensor measurements is done using sequential importance resampling. Two different measures of risk are studied. The first is based on driver intentions not matching the expected maneuver, which in turn could lead to a dangerous situation. The second measure is based on a trajectory prediction step and uses the two measures time to collision (TTC) and time to critical collision probability (TTCCP).

    The implemented model can be applied in complex traffic scenarios with numerous participants. In this work, we focus on intersection and roundabout scenarios. The model is tested on simulated and real data from these scenarios. In these qualitative tests, the model was able to correctly identify collisions a few seconds before they occurred and was also able to avoid false positives by detecting the vehicles that will give way.
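The approximate inference step mentioned above, sequential importance resampling, replaces a weighted particle set by an equally weighted one drawn in proportion to the importance weights. A generic systematic-resampling sketch (a standard formulation, not the thesis's implementation):

```python
import random

def systematic_resample(particles, weights, rng=None):
    """One resampling step of a particle filter: draw len(particles) new,
    equally weighted particles with probability proportional to the weights,
    using a single stratified uniform draw (systematic resampling)."""
    rng = rng or random.Random(0)
    n = len(particles)
    total = sum(weights)
    cumulative, acc = [], 0.0
    for w in weights:
        acc += w / total
        cumulative.append(acc)
    start = rng.random() / n          # one uniform draw in [0, 1/n)
    resampled, i = [], 0
    for j in range(n):
        u = start + j / n             # evenly spaced thresholds
        while cumulative[i] < u:
            i += 1
        resampled.append(particles[i])
    return resampled

# A particle holding all the weight is duplicated; zero-weight ones vanish.
new_set = systematic_resample(["a", "b"], [0.0, 1.0])
```

Resampling keeps the particle set concentrated on states consistent with the noisy sensor measurements, which is what makes real-time inference over the traffic state tractable.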

     

    Full text (pdf)
    fulltext
  • Qader, Aso
    et al.
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Shiver, William
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Developing an Advanced Internal Ratings-Based Model by Applying Machine Learning (2020). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Since the regulatory framework Basel II was implemented in 2007, banks have been allowed to develop internal risk models for quantifying the capital requirement. Using data on retail non-performing loans from Hoist Finance, the thesis assesses the Advanced Internal Ratings-Based approach. In particular, it focuses on how banks active in the non-performing loan industry can risk-classify their loans despite limited data availability on the debtors. Moreover, the thesis analyses the effect of the maximum-recovery period on the capital requirement. In short, a comparison of five different mathematical models based on prior research in the field revealed that the loans may be modelled by a two-step tree model with binary logistic regression and zero-inflated beta regression, resulting in a maximum-recovery period of eight years. Still, it is necessary to recognize the difficulty in distinguishing between low- and high-risk customers by primarily assessing rudimentary data about the borrowers. A recommended amendment to the analysis in further research would be to include macroeconomic variables to better capture the effect of economic downturns.
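The two-step structure mentioned above separates "will anything be recovered?" from "how much, given that something is?". Combining the two stages into an expected recovery is the zero-inflated logic; the function and figures below are purely illustrative, not the thesis's fitted model:

```python
def expected_recovery(p_any_payment, mean_fraction_if_paying, balance):
    """Two-step expected recovery on a non-performing loan: a first
    (logistic-style) stage gives the probability of any repayment; a second
    (beta-regression-style) stage gives the expected recovered fraction
    conditional on repayment.  Their product times the balance is the
    expectation under the zero-inflated model.  Illustrative numbers only."""
    return p_any_payment * mean_fraction_if_paying * balance

# A 30% chance of any repayment, recovering on average 40% of the balance
# when repayment happens, on a 10 000 balance: ~1200 expected.
exp_rec = expected_recovery(0.30, 0.40, 10_000.0)
```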

    Full text (pdf)
    fulltext
  • Andersson, Aron
    et al.
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Mirkhani, Shabnam
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Portfolio Performance Optimization Using Multivariate Time Series Volatilities Processed With Deep Layering LSTM Neurons and Markowitz (2020). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The stock market is a non-linear field, but many of the best-known portfolio optimization algorithms are based on linear models. In recent years, the rapid development of machine learning has produced flexible models capable of complex pattern recognition. In this paper, we propose two different methods of portfolio optimization: one based on the development of a multivariate time-dependent neural network, the long short-term memory (LSTM), capable of finding long- and short-term price trends. The other is the linear Markowitz model, where we add an exponential moving average to the input price data to capture underlying trends. The input data to our neural network are daily prices, volumes and market indicators such as the volatility index (VIX). The output variables are the prices predicted for each asset the following day, which are then further processed to produce metrics such as expected returns, volatilities and prediction error to design a portfolio allocation that optimizes a custom utility function like the Sharpe ratio. The LSTM model produced a portfolio with a return and risk that was close to the actual market conditions for the date in question, but with a high error value, indicating that our LSTM model is insufficient as a sole forecasting tool. However, the ability to predict upward and downward trends was somewhat better than expected, and we therefore conclude that multiple neural networks can be used as indicators, each responsible for some specific aspect of what is to be analysed, to draw a conclusion from the result. The findings also suggest that the input data should be more thoroughly considered, as the prediction accuracy is enhanced by the choice of variables and the external information used for training.
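The custom utility named in the abstract, the Sharpe ratio, is simply the mean excess return over its standard deviation; a minimal sketch (example return values are made up):

```python
import statistics

def sharpe_ratio(returns, risk_free=0.0):
    """Sharpe ratio of a per-period return series: mean excess return
    divided by the sample standard deviation of excess returns.
    No annualization is applied in this sketch."""
    excess = [r - risk_free for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

ratio = sharpe_ratio([0.01, 0.03, 0.02, 0.04])
```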

    Full text (pdf)
    fulltext
  • Lagerström, Erik
    et al.
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Magne Schrab, Michael
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    An Empirical Study of Modern Portfolio Optimization (2020). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Mean variance optimization has shortcomings that make the strategy far from optimal from an investor's perspective. The purpose of the study is to conduct an empirical investigation into how modern methods of portfolio optimization address the shortcomings associated with mean variance optimization. Equal risk contribution, the most diversified portfolio and a modification of the minimum variance portfolio are considered as alternatives to the mean variance model. The portfolio optimization models introduced are explained in detail and solved using the optimization algorithms cyclical coordinate descent and the alternating direction method of multipliers. Through implementation and backtesting using a diverse set of indices representing various asset classes, the study shows that the mean variance model suffers from high turnover and sensitivity to input parameters in comparison to the modern alternatives. The sophisticated asset allocation models, equal risk contribution and the most diversified portfolio, do not rely on expected return as an input parameter, which is seen as an advantage, and are not affected to the same extent by the shortcomings associated with mean variance optimization. The paper concludes by discussing the findings critically and suggesting ideas for further research.
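The idea behind equal risk contribution can be shown with a naive fixed-point iteration: rescale each weight toward its target share of portfolio risk and renormalize. This is a simple stand-in for the cyclical coordinate descent and ADMM solvers used in the study, run here on a made-up diagonal covariance:

```python
def risk_contributions(w, cov):
    """Fraction of portfolio variance contributed by each asset:
    RC_i = w_i * (cov @ w)_i / (w' cov w)."""
    n = len(w)
    mv = [sum(cov[i][j] * w[j] for j in range(n)) for i in range(n)]
    variance = sum(w[i] * mv[i] for i in range(n))
    return [w[i] * mv[i] / variance for i in range(n)]

def risk_budget_weights(cov, budgets, iterations=200):
    """Naive multiplicative iteration toward a long-only risk budgeting
    portfolio: scale w_i by sqrt(budget_i / RC_i) and renormalize.
    A heuristic sketch, not the CCD/ADMM solvers of the study."""
    n = len(budgets)
    w = [1.0 / n] * n
    for _ in range(iterations):
        rc = risk_contributions(w, cov)
        w = [wi * (b / r) ** 0.5 for wi, b, r in zip(w, budgets, rc)]
        s = sum(w)
        w = [wi / s for wi in w]
    return w

# Equal risk contribution with uncorrelated volatilities of 20% and 10%:
# the low-volatility asset ends up with twice the weight.
w = risk_budget_weights([[0.04, 0.0], [0.0, 0.01]], [0.5, 0.5])
```

Note that neither input is an expected return, which illustrates why these allocations sidestep the estimation-error sensitivity of mean variance optimization.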

    Full text (pdf)
    fulltext
  • Eriksson, Ivar
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Image Distance Learning for Probabilistic Dose–Volume Histogram and Spatial Dose Prediction in Radiation Therapy Treatment Planning (2020). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Construction of radiotherapy treatments for cancer is a laborious and time-consuming task. At the same time, when presented with a treatment plan, an oncologist can quickly judge whether or not it is suitable. This means that the problem of constructing these treatment plans is well suited for automation.

    This thesis investigates a novel way of automatic treatment planning. The treatment planning system this pipeline is constructed for provides dose mimicking functionality with probability density functions of dose–volume histograms (DVHs) and spatial dose as inputs. Therefore this will be the output of the pipeline. The input is historically treated patient scans, segmentations and spatial doses.

    The approach involves three modules which are individually replaceable with little to no impact on the remaining two modules. The modules are: an autoencoder as a feature extractor to concretise important features of a patient segmentation, a distance optimisation step to learn a distance in the previously constructed feature space and, finally, a probabilistic spatial dose estimation module using sparse pseudo-input Gaussian processes trained on voxel features.

    Although performance evaluation in terms of clinical plan quality was beyond the scope of this thesis, numerical results show that the proposed pipeline is successful in capturing salient features of patient geometry as well as predicting reasonable probability distributions for DVH and spatial dose. Its loosely connected nature also gives hope that some parts of the pipeline can be utilised in future work.

    Full text (pdf)
    fulltext
  • Carlsson, Filip
    et al.
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Lindgren, Philip
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Deep Scenario Generation of Financial Markets (2020). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The goal of this thesis is to explore a new clustering algorithm, VAE-Clustering, and examine if it can be applied to find differences in the distribution of stock returns and augment the distribution of a current portfolio of stocks and see how it performs in different market conditions.

    The VAE-clustering method is, as mentioned, a newly introduced method and not widely tested, especially not on time series. The first step is therefore to see if and how well the clustering works. We first apply the algorithm to a dataset containing monthly time series of the power demand in Italy. The purpose of this part is to focus on how well the method works technically. Once the model works well and generates proper results on the Italian power demand data, we move forward and apply the model to stock return data. In the latter application we are unable to find meaningful clusters and are therefore unable to move forward towards the goal of the thesis.

    The results show that the VAE-clustering method is applicable to time series. The power demand has clear differences from season to season and the model can successfully identify those differences. When it comes to the financial data, we hoped that the model would be able to find different market regimes based on time periods. The model is, however, not able to distinguish different time periods from each other. We therefore conclude that the VAE-clustering method is applicable to time series data, but that the structure and setting of the financial data in this thesis makes it too hard to find meaningful clusters.

    The major finding is that the VAE-clustering method can be applied to time series. We highly encourage further research to determine whether the method can be successfully used on financial data in settings different from those tested in this thesis.

    Full text (pdf)
    fulltext
  • Malmgren, Erik
    et al.
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Zhang, Annie
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Risk Modeling of Sustainable Mutual Funds Using GARCH Time Series (2020). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The demand for sustainable investments has increased in recent years. There is considerable literature covering backtesting of the performance and risk of socially responsible investments (SRI) compared to conventional investments. However, literature that models and examines the risk characteristics of SRI compared to conventional investments is limited. This thesis seeks to model and compare the risk of mutual funds scoring in the top 10% in terms of sustainability, based on Morningstar Portfolio Sustainability Score, to those scoring in the bottom 10%. We create one portfolio consisting of the top 10% of funds and one consisting of the bottom 10%, for European and global mutual funds separately, thus creating four portfolios in total. The analysis is based on data on the funds' returns and Morningstar Portfolio Sustainability Scores from December 2015 to August 2019. Investigating several GARCH models, we find an ARMA-GARCH model with a skewed Student's t-distribution as innovation distribution to give the best fit to the daily log-returns of each portfolio. Based on the fitted ARMA-GARCH models with skewed Student's t-distribution, we use a parametric bootstrap method to compute 95% confidence intervals for the difference in long-run volatility and value at risk (VaR) between the portfolios with high and low Morningstar Portfolio Sustainability Scores. This is performed on the portfolios of European and global funds separately. We conclude that, for global and European funds respectively, no significant difference in terms of long-run volatility and VaR is found between the funds in each of the 10% ends of the Morningstar Portfolio Sustainability Score.
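The parametric bootstrap step can be sketched in miniature: resimulate both series from their fitted models, re-estimate the risk measure on each replicate, and take empirical quantiles of the differences. For a self-contained toy we simulate plain Gaussian returns rather than refitting the thesis's ARMA-GARCH with skewed Student's t:

```python
import random
import statistics

def bootstrap_vol_diff_ci(n_obs, sd_a, sd_b, n_boot=1000, level=0.95, seed=1):
    """Parametric bootstrap confidence interval for the difference in
    volatility between two return series, here resimulated from fitted
    Gaussian models - a toy stand-in for resimulating ARMA-GARCH paths."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        a = [rng.gauss(0.0, sd_a) for _ in range(n_obs)]
        b = [rng.gauss(0.0, sd_b) for _ in range(n_obs)]
        diffs.append(statistics.stdev(a) - statistics.stdev(b))
    diffs.sort()
    lo_idx = int(n_boot * (1 - level) / 2)
    hi_idx = int(n_boot * (1 + level) / 2) - 1
    return diffs[lo_idx], diffs[hi_idx]

# Equal true volatilities: the interval should straddle zero, i.e. no
# significant difference - the pattern the thesis reports.
lo, hi = bootstrap_vol_diff_ci(250, 0.01, 0.01)
```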

    Full text (pdf)
    fulltext
  • Larsson, Sofia
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    A Study of the Loss Landscape and Metastability in Graph Convolutional Neural Networks, 2020. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Many novel graph neural network models have reported impressive performance on benchmark datasets, but the theory behind these networks is still being developed. In this thesis, we study the trajectories of gradient descent (GD) and stochastic gradient descent (SGD) in the loss landscape of graph convolutional neural networks by replicating the study by Xing et al. [1] for feed-forward networks. Furthermore, we empirically examine whether the training process can be accelerated by an optimization algorithm inspired by stochastic gradient Langevin dynamics, and what effect the topology of the graph has on the convergence of GD, by perturbing its structure. We find that the loss landscape is relatively flat and that SGD does not encounter any significant obstacles during its propagation. The noise-induced gradient appears to aid SGD in finding a stationary point with desirable generalisation capabilities when the learning rate is poorly tuned. Additionally, we observe that the topological structure of the graph plays a part in the convergence of GD, but further research is required to understand how.
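The loss-landscape probe used in such replication studies can be sketched on a toy model: train with SGD, then evaluate the loss along the straight line between the initial and final parameters to see how flat the path is. The example below uses plain logistic regression instead of a GCN, purely to stay self-contained; the linear-interpolation idea is the same.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)  # linearly separable labels

def loss(w):
    """Binary cross-entropy over the full data set."""
    p = 1 / (1 + np.exp(-X @ w))
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def grad(w, idx):
    """Mini-batch gradient of the cross-entropy."""
    p = 1 / (1 + np.exp(-X[idx] @ w))
    return X[idx].T @ (p - y[idx]) / len(idx)

w0 = rng.normal(size=5)  # random initial parameters
w = w0.copy()
for _ in range(500):     # plain SGD with mini-batches of 32
    idx = rng.choice(len(X), 32, replace=False)
    w -= 0.5 * grad(w, idx)

# Loss along the line segment w0 -> w: a flat, monotone profile suggests
# SGD met no significant obstacles between start and end point.
alphas = np.linspace(0, 1, 11)
profile = [loss((1 - a) * w0 + a * w) for a in alphas]
```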

    Full text (pdf)
  • Hilmersson, Markus
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A Study Evaluating the Liquidity Risk for Non-Maturity Deposits at a Swedish Niche Bank, 2020. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Since the 2008 financial crisis, interest in modelling non-maturity deposits has grown quickly. The area has been widely analysed from the perspective of a traditional bank, where customers foremost hold transactional and salary deposits. In recent years, however, the Swedish banking sector has become more digitized, which has opened opportunities for niche banking actors to establish themselves on the market. This study therefore examines how the theories developed and previously used for modelling liquidity volumes at traditional banks can be applied at a niche bank focused on savings and investments. The topics covered are short-rate modelling using the Vasicek model, liquidity volume modelling using SARIMA and SARIMAX models, and liquidity risk modelling using an approach developed by Kalkbrener and Willing. When modelling the liquidity volumes, the data set was divided into six groups by account and customer type. For four of these groups, SARIMA models gave the lower in-sample and out-of-sample prediction errors; only for two of the six groups did SARIMAX models improve the prediction errors. Finally, the resulting minimization of liquidity volume, forecast five years into the future, gave reasonable and satisfactory results.
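The core of a SARIMA-style volume forecast, seasonal differencing followed by a low-order AR fit, can be sketched in a few lines. The series, the weekly period, and the train/test split below are invented for illustration; a full SARIMA or SARIMAX fit would estimate all orders jointly (e.g. with statsmodels' SARIMAX).

```python
import numpy as np

rng = np.random.default_rng(2)
s = 7                                  # assumed weekly seasonality
n = 400
t = np.arange(n)
# Synthetic "liquidity volume": level + seasonal cycle + random-walk noise.
y = 100 + 5 * np.sin(2 * np.pi * t / s) + np.cumsum(rng.normal(0, 0.3, n))

# Seasonal differencing removes the periodic component: z_t = y_t - y_{t-s}.
z = y[s:] - y[:-s]

# Fit an AR(1) coefficient on the differenced series by least squares.
a = np.sum(z[1:] * z[:-1]) / np.sum(z[:-1] ** 2)

# One-step-ahead forecasts on a held-out tail:
# y_hat_i = y_{i-s} + a * (y_{i-1} - y_{i-1-s})
train = 300
preds = []
for i in range(train, n):
    z_prev = y[i - 1] - y[i - 1 - s]
    preds.append(y[i - s] + a * z_prev)
preds = np.array(preds)
mae = np.mean(np.abs(preds - y[train:]))  # out-of-sample prediction error
```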

    Full text (pdf)
  • Hanna, Peter
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Swartling, Erik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Anomaly Detection of Time Series Data using Unsupervised Machine Learning Methods: A Clustering-Based Approach, 2020. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    For many companies in the manufacturing industry, finding damage in their products is a vital process, especially during the production phase. Since machine learning techniques can aid the process of damage identification, many companies use these methods to enhance the production process even further. For some industries, damage identification is heavily linked with anomaly detection of different measurements. In this thesis, the aim is to construct unsupervised machine learning models to identify anomalies in unlabeled measurements of pumps, using high-frequency sampled current and voltage time series data. Each measurement can be split into five phases: the startup phase, three duty point phases and, lastly, the shutdown phase. The approach is based on clustering methods, where the main algorithms used are the density-based algorithms DBSCAN and LOF. Dimensionality reduction techniques, such as featur
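The density idea behind DBSCAN's noise points and LOF can be reduced to a minimal sketch: a point whose distance to its k-th nearest neighbour is unusually large sits in a low-density region and is flagged as anomalous. The data and threshold below are synthetic stand-ins, not the pump measurements.

```python
import numpy as np

def knn_distance_outliers(X, k=5, quantile=0.95):
    """Flag points whose k-th nearest-neighbour distance is unusually large.

    This is the density notion underlying DBSCAN noise points and LOF,
    stripped to its simplest form (no reachability or local ratio).
    """
    # Pairwise Euclidean distances; column 0 after sorting is the self-distance.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    kth = np.sort(d, axis=1)[:, k]          # distance to the k-th neighbour
    return kth > np.quantile(kth, quantile)

rng = np.random.default_rng(3)
normal = rng.normal(0, 1, size=(95, 2))     # one dense cluster
anomalies = rng.uniform(6, 8, size=(5, 2))  # sparse far-away points
X = np.vstack([normal, anomalies])
flags = knn_distance_outliers(X)            # True for the anomalous rows
```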

    Full text (pdf)
  • Åkerström, Otto
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Multi-Agent System for Coordinated Defence, 2020. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Today, defence systems are becoming more complex as technology advances, and it is of great importance to explore new ways of solving problems and to keep national defence current. In particular, Artificial Intelligence (AI) is used in a growing number of industries, such as logistics, inventory management and defence. This thesis evaluates the possibility of using Reinforcement Learning (RL) in an Air Defence Coordination (ADC) scenario at Saab AB. To evaluate RL, a simplified ADC scenario is considered and solved using two different methods: Q-learning and Deep Q-learning (DQL).

    The results of the two methods are discussed, as are the limitations in scope and complexity of Q-learning. Deep Q-learning, on the other hand, proves relatively easy to apply to more complicated scenarios. Finally, a last experiment with a far more complex scenario is constructed to demonstrate the scalability of DQL and to create a foundation for future work in this field.
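Tabular Q-learning of the kind applied to the simplified scenario can be illustrated on a toy chain environment (a hypothetical stand-in, not the Saab ADC scenario). Q-learning is off-policy, so the sketch collects experience with a uniformly random behaviour policy while each update bootstraps on the greedy next-state value.

```python
import numpy as np

# States 0..4 on a chain; action 0 moves left, action 1 moves right;
# reward 1 for reaching the terminal state 4.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9
rng = np.random.default_rng(4)

def step(s, a):
    """Deterministic transition; returns (next state, reward, done)."""
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    reward = 1.0 if s2 == n_states - 1 else 0.0
    return s2, reward, s2 == n_states - 1

for _ in range(500):                        # episodes
    s = 0
    for _ in range(50):                     # step cap per episode
        a = int(rng.integers(n_actions))    # uniform random behaviour policy
        s2, reward, done = step(s, a)
        target = reward + (0.0 if done else gamma * Q[s2].max())
        Q[s, a] += alpha * (target - Q[s, a])  # bootstrap on greedy value
        s = s2
        if done:
            break

policy = Q.argmax(axis=1)  # greedy policy: "right" is optimal in states 0..3
```

DQL replaces the table `Q` with a neural network, which is what makes the approach scale to state spaces too large to enumerate.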

    Full text (pdf)
  • Wikland, Love
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Early-Stage Prediction of Lithium-Ion Battery Cycle Life Using Gaussian Process Regression, 2020. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Data-driven prediction of battery health has gained increased attention over the past couple of years, in both academia and industry. Accurate early-stage predictions of battery performance would create new opportunities in production and use. Using data from only the first 100 cycles, in a data set of 124 cells whose lifetimes span between 150 and 2300 cycles, this work combines parametric linear models with non-parametric Gaussian process regression to predict cycle lifetime with an overall mean error of 8.8%. This work presents a relevant contribution to current research, as this combination of methods has not previously been applied to regressing battery lifetime on a high-dimensional feature space. The results further show that Gaussian process regression can serve as a valuable tool in future data-driven implementations of battery health prediction.
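The Gaussian process regression posterior (mean plus uncertainty band) can be sketched on a toy 1-D problem. The RBF kernel, length scale, noise level, and sine target below are illustrative assumptions, not the thesis's feature space or fitted hyperparameters.

```python
import numpy as np

def rbf(a, b, ls=1.0, var=1.0):
    """Squared-exponential (RBF) kernel between 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

# Toy 1-D regression standing in for "early-cycle feature -> cycle life".
rng = np.random.default_rng(5)
X = np.linspace(-3, 3, 25)
y = np.sin(X) + 0.1 * rng.normal(size=25)   # noisy observations
Xs = np.linspace(-3, 3, 101)                # prediction grid

noise = 0.1 ** 2
K = rbf(X, X) + noise * np.eye(len(X))

# Standard GP posterior via Cholesky: mean = K_*^T K^{-1} y.
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
mean = rbf(Xs, X) @ alpha
cov = rbf(Xs, Xs) - rbf(Xs, X) @ np.linalg.solve(K, rbf(X, Xs))
std = np.sqrt(np.clip(np.diag(cov), 0, None))  # pointwise predictive std
```

The predictive standard deviation is what makes GPR attractive here: every lifetime prediction comes with a calibrated uncertainty, which a point-estimate regressor does not provide.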

    Full text (pdf)