Search results 101 - 150 of 464
  • 101.
    Dai, Elin
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Güleryüz, Lara
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Factors that influence condominium pricing in Stockholm: A regression analysis, 2019, Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits, Student thesis
    Abstract [en]

    This thesis aims to examine which factors are significant when forecasting the selling price of condominiums in Stockholm city. Through the use of multiple linear regression, response variable transformation, and a multitude of methods for refining the model fit, a conclusive, out-of-sample validated model with a confidence level of 95% was obtained. The statistical methods were carried out using the software R.

    This study is limited to the districts of inner-city Stockholm with the postal codes 112-118, and the final model can only be applied to this area, as the postal codes are included as regressors in the model. The selling prices analyzed span January 2014 to April 2019; changes in the time value of money over this period have not been taken into account. The final model included the following variables as having an impact on the selling price: floor, living area, monthly fee, construction year, and district of the city.
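As a rough illustration of the modelling approach described above (multiple linear regression with a transformed response), here is a minimal numpy sketch; the covariate names, ranges, and coefficient values are invented for illustration and are not taken from the thesis data:

```python
import numpy as np

# Toy data: log-transformed selling price regressed on illustrative covariates.
rng = np.random.default_rng(0)
n = 200
living_area = rng.uniform(20, 120, n)      # m^2 (hypothetical range)
floor = rng.integers(0, 8, n)
monthly_fee = rng.uniform(1000, 6000, n)   # SEK (hypothetical range)
log_price = (14.0 + 0.012 * living_area + 0.03 * floor
             - 0.00005 * monthly_fee + rng.normal(0, 0.1, n))

# Design matrix with an intercept column; ordinary least squares via lstsq.
X = np.column_stack([np.ones(n), living_area, floor, monthly_fee])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# In-sample R^2 as a simple goodness-of-fit check.
resid = log_price - X @ beta
r2 = 1 - resid.var() / log_price.var()
```

A full treatment would add the postal-code dummies, diagnostics, and out-of-sample validation that the thesis describes.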

  • 102. Damasso, M.
    et al.
    Del Sordo, Fabio
    KTH, Centres, Nordic Institute for Theoretical Physics NORDITA.
    Proxima Centauri reloaded: Unravelling the stellar noise in radial velocities, 2017, In: Astronomy and Astrophysics, ISSN 0004-6361, E-ISSN 1432-0746, Vol. 599, article id A126, Article in journal (Refereed)
    Abstract [en]

    Context. The detection and characterisation of Earth-like planets with Doppler signals of the order of 1 m s-1 currently represent one of the greatest challenges for extrasolar-planet hunters. As such findings are often controversial, it is desirable to provide independent confirmations of the discoveries. Testing different models for the suppression of the non-Keplerian stellar signals that usually plague radial velocity data is essential to ensuring that findings are robust and reproducible. Aims. Using an alternative treatment of the stellar noise to that discussed in the discovery paper, we re-analyse the radial velocity dataset that led to the detection of a candidate terrestrial planet orbiting the star Proxima Centauri. We aim to confirm the existence of this outstanding planet and to test the existence of a second planetary signal. Methods. Our technique jointly modelled Keplerian signals and residual correlated signals in radial velocities using Gaussian processes. We analysed only radial velocity measurements, without including other ancillary data in the fitting procedure. In a second step, we compared our outputs with results coming from photometry to provide a consistent physical interpretation. Our analysis was performed in a Bayesian framework to quantify the robustness of our findings. Results. We show that the correlated noise can be successfully modelled by Gaussian process regression, and contains a periodic term modulated on the stellar rotation period and characterised by an evolutionary timescale of the order of one year. Both findings appear to be robust when compared with results obtained from archival photometry, thus providing a reliable description of the noise properties. We confirm the existence of a coherent signal described by a Keplerian orbit equation that can be attributed to the planet Proxima b, and provide an independent estimate of the planetary parameters. Our Bayesian analysis dismisses the existence of a second planetary signal in the present dataset.

  • 103.
    Damour, Gabriel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Information-Theoretic Framework for Network Anomaly Detection: Enabling online application of statistical learning models to high-speed traffic, 2019, Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits, Student thesis
    Abstract [en]

    With the current proliferation of cyber attacks, safeguarding internet-facing assets from network intrusions is becoming a vital task in our increasingly digitalised economies. Although recent successes of machine learning (ML) models herald the dawn of a new generation of intrusion detection systems (IDS), current solutions struggle to implement these efficiently, leaving many IDSs to rely on rule-based techniques. In this paper we begin by reviewing the different approaches to feature construction and attack source identification employed in such applications. We refer to these steps as the framework within which models are implemented, and use it as a prism through which we can identify the challenges different solutions face when applied under modern network traffic conditions. Specifically, we discuss how the most popular framework, the so-called flow-based approach, suffers from significant overhead introduced by its resource-heavy pre-processing step. To address these issues, we propose the Information Theoretic Framework for Network Anomaly Detection (ITF-NAD), whose purpose is to facilitate online application of statistical learning models to high-speed network links, as well as to provide a method of identifying the sources of traffic anomalies. Its development was inspired by previous work on information-theoretic anomaly and outlier detection, and it employs modern techniques of entropy estimation over data streams. Furthermore, a case study of the framework's detection performance over 5 different types of Denial of Service (DoS) attacks is undertaken in order to illustrate its potential use for intrusion detection and mitigation. The case study resulted in state-of-the-art performance for time-anomaly detection of single-source as well as distributed attacks, and shows promising results regarding its ability to identify underlying sources.
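The core idea of entropy-based detection over data streams can be sketched as follows; the fixed windowing scheme and the toy traffic trace are assumptions for illustration and do not reproduce ITF-NAD itself:

```python
import numpy as np
from collections import Counter

def empirical_entropy(counts):
    """Shannon entropy (bits) of the empirical distribution given by counts."""
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def entropy_series(packets, window=100):
    """Entropy of the source-address distribution over consecutive windows.
    A DoS attack concentrating traffic on a few sources shows up as a sharp
    drop in this series; a spoofed DDoS would instead show a spike."""
    return [empirical_entropy(Counter(packets[i:i + window]))
            for i in range(0, len(packets) - window + 1, window)]

# Toy trace: diverse background traffic, then one dominant source (an attack).
rng = np.random.default_rng(2)
background = list(rng.integers(0, 50, 300))          # 50 distinct sources
attack = list(rng.choice([7, 7, 7, 7, 13], 200))     # mostly source 7
series = entropy_series(background + attack)
```

A threshold or change-point test on `series` would then flag the anomalous windows; the thesis uses streaming entropy estimators rather than exact per-window counts.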

  • 104.
    Dastmard, Benjamin
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A statistical analysis of the connection between test results and field claims for ECUs in vehicles, 2013, Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits, Student thesis
    Abstract [en]

    The objective of this thesis is to analyse the connection between test results and field claims of ECUs (electronic control units) at Scania in order to improve the acceptance criteria and evaluate software testing strategies. The connection is examined through computation of different measures of dependence, such as Pearson's correlation, Spearman's rank correlation and Kendall's tau. The correlations are computed from test results in different ECU projects and considered in a predictive model based on logistic regression. Numerical results indicate a weak connection between test results and field claims. This is partly due to an insufficient number of ECU projects and the lack of traceability of field claims and test results. The main conclusion confirms the present software testing strategy. Continuous software release and testing results in fewer field claims and thus a better product.
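The three dependence measures named in the abstract can be computed directly with scipy; the synthetic test/claim pairs below are purely illustrative:

```python
import numpy as np
from scipy import stats

# Toy paired data standing in for (test failures, field claims) per ECU project.
rng = np.random.default_rng(3)
test_failures = rng.poisson(5.0, 40)
field_claims = test_failures + rng.poisson(3.0, 40)   # weakly related by design

# The three dependence measures used in the thesis.
pearson_r, pearson_p = stats.pearsonr(test_failures, field_claims)
spearman_r, spearman_p = stats.spearmanr(test_failures, field_claims)
kendall_t, kendall_p = stats.kendalltau(test_failures, field_claims)
```

Pearson captures linear dependence, while Spearman and Kendall are rank-based and robust to monotone transformations, which is why the thesis reports all three.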

  • 105.
    Datye, Shlok
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Money Management Principles for Mechanical Traders, 2012, Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits, Student thesis
    Abstract [en]

    In his five books published during 1990-2009, starting with Portfolio Management Formulas, Ralph Vince made various important concepts in the field of money management accessible to mechanical traders with a limited background in mathematics. In the process, he coined and popularized the terms “optimal f” and “leverage space trading model.”

    This thesis provides a sound mathematical understanding of these concepts, and adds various extensions and insights of its own. It also provides practical examples of how mechanical traders can use these concepts to their advantage. Although beneficial to all mechanical traders, the examples involve trading futures contracts, and practical details such as the back-adjustment of futures prices are provided along the way.
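Vince's optimal f can be sketched with a simple grid search over the fraction of capital staked per trade; the +2/-1 coin-toss trade sequence below is the standard textbook example (with known optimum f = 0.25), not data from the thesis:

```python
import numpy as np

def optimal_f(trade_returns, grid=np.linspace(0.01, 0.99, 99)):
    """Vince's optimal f by grid search: the fraction f of capital (scaled by
    the largest losing trade) that maximizes the geometric mean of the
    holding-period returns. Requires at least one losing trade."""
    trade_returns = np.asarray(trade_returns, dtype=float)
    biggest_loss = trade_returns.min()
    assert biggest_loss < 0, "optimal f is undefined without a losing trade"
    # Holding-period returns: HPR_i(f) = 1 + f * (trade_i / -biggest_loss).
    hprs = 1 + grid[:, None] * (trade_returns[None, :] / -biggest_loss)
    gmeans = np.where((hprs > 0).all(axis=1),
                      np.exp(np.log(np.clip(hprs, 1e-12, None)).mean(axis=1)),
                      0.0)
    return float(grid[gmeans.argmax()])

# Coin toss paying +2 or -1 with equal probability: optimal f is 0.25.
f = optimal_f([2, -1])
```

Betting more than the optimal f lowers the geometric growth rate and eventually guarantees ruin, which is the central point Vince makes for mechanical traders.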

  • 106.
    de Sauvage Vercour, Héloïse
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Analysis and comparison of capital allocation techniques in an insurance context, 2013, Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits, Student thesis
    Abstract [en]

    Companies issuing insurance cover, in return for insurance premiums, face the payment of claims occurring according to a loss distribution. Hence, capital must be held by the companies so that they can guarantee the fulfilment of the claims of each line of insurance. The increased incidence of insurance insolvency motivated the birth of new legislation such as the European Solvency II Directive. Companies have to determine the required amount of capital and the optimal capital allocation across the different lines of insurance in order to keep the risk of insolvency at an adequate level. The capital allocation problem may be treated in different ways, starting from the insurance company balance sheet. Here, the running process and efficiency of four methods are evaluated and compared so as to point out the characteristics of each method. The Value-at-Risk technique is straightforward and can easily be generated for any loss distribution. The insolvency put option principle is easily implementable and is sensitive to the degree of default. The capital asset pricing model is one of the oldest reliable methods and still provides very helpful intermediate results. The Myers and Read marginal capital allocation approach encourages diversification and introduces the concept of default value. Applications of the four methods to some fictive and real insurance companies are provided. The thesis further analyses the sensitivity of those methods to changes in the economic context and comments on how insurance companies can anticipate those changes.

  • 107.
    de Sá Gustafsson, Alexandra Maria-Pia Madeleine
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Delifotis, Georgios
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Optimering av försörjningskedja av frysboxar (Optimization of a supply chain for cooler boxes), 2019, Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits, Student thesis
    Abstract [en]

    Dometic Group is a company that produces and sells products for mobile lifestyles. One of its many products is a type of cooling box called the TE-box. The TE-box accounts for approximately five percent (5%) of Dometic's revenue. Long lead times and presumably avoidable high costs are some of the problems connected to the production of TE-boxes, along with difficulties in meeting changes in demand and planning stock. The main purpose of this paper is to examine Dometic's supply chain, with the TE-box in focus, and to give recommendations, including improvement strategies with respect to material, information and money flows, based on the results of our mathematical analysis of the presented problem. The framework of this paper is the application of relevant mathematical theory to this real-life industrial problem. The models are derived from optimization and systems theory. Data was received directly from the source.

  • 108.
    del Aguila Pla, Pol
    KTH, School of Electrical Engineering and Computer Science (EECS), Information Science and Engineering.
    Inverse problems in signal processing: Functional optimization, parameter estimation and machine learning, 2019, Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Inverse problems arise in any scientific endeavor. Indeed, it is seldom the case that our senses or basic instruments, i.e., the data, provide the answer we seek. It is only by using our understanding of how the world has generated the data, i.e., a model, that we can hope to infer what the data imply. Solving an inverse problem is, simply put, using a model to retrieve the information we seek from the data.

    In signal processing, systems are engineered to generate, process, or transmit signals, i.e., indexed data, in order to achieve some goal. The goal of a specific system could be to use an observed signal and its model to solve an inverse problem. However, the goal could also be to generate a signal that exposes a parameter to investigation by inverse problems. Inverse problems and signal processing overlap substantially and rely on the same set of concepts and tools. This thesis lies at their intersection, and presents results in modeling, optimization, statistics, machine learning, biomedical imaging and automatic control.

    The novel scientific content of this thesis is contained in its seven composing publications, which are reproduced in Part II. In five of these, which are mostly motivated by a biomedical imaging application, a set of related optimization and machine learning approaches to source localization under diffusion and convolutional coding models are presented. These are included in Publications A, B, E, F and G, which also include contributions to the modeling and simulation of a specific family of image-based immunoassays. Publication C presents the analysis of a system for clock synchronization between two nodes connected by a channel, which is a problem of utmost relevance in automatic control. The system exploits a specific node design to generate a signal that enables the estimation of the synchronization parameters. In the analysis, substantial contributions to the identifiability of sawtooth signal models under different conditions are made. Finally, Publication D brings to light and proves results that have been largely overlooked by the signal processing community and characterize the information that quantized linear models contain about their location and scale parameters.

  • 109.
    del Aguila Pla, Pol
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Information Science and Engineering.
    Jaldén, Joakim
    KTH, School of Electrical Engineering and Computer Science (EECS), Information Science and Engineering.
    Inferences from quantized data - Likelihood logconcavity, Manuscript (preprint) (Other academic)
    Abstract [en]

    In this paper, we present to the signal processing community the most general likelihood logconcavity statement for quantized data to date, together with its proof, which has never been published. In particular, we show how Prékopa's theorem can be used to establish that the likelihood for quantized linear models is jointly logconcave with respect to both its location and scale parameters in a broad range of cases. In order to prove this result and explain the limitations of the proof technique, we study sets generated by combinations of points with positive semi-definite matrices whose sum is the identity.

  • 110.
    Dellner, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Can a simple model for the interaction between value and momentum traders explain how equity futures react to earnings announcements?, 2011, Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits, Student thesis
  • 111. Dermoune, Azzouz
    et al.
    Djehiche, Boualem
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Rahmania, Nadji
    Estimation of the smoothing parameters in the HPMV filter, 2011, In: ANALELE STIINT UNIV AL I CUZA, ISSN 1221-8421, Vol. 57, no 1, p. 61-75, Article in journal (Refereed)
    Abstract [en]

    We suggest an optimality criterion for choosing the best smoothing parameters for an extension of the so-called Hodrick-Prescott Multivariate (HPMV) filter. We show that this criterion admits a whole set of optimal smoothing parameters, to which the widely used noise-to-signal ratios belong. We also propose explicit consistent estimators of these noise-to-signal ratios, which in turn yield a new, well-performing method to estimate the output gap.
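For context, the univariate Hodrick-Prescott filter underlying the HPMV extension can be written in a few lines of numpy; the toy series and the smoothing parameter lambda = 1600 (the conventional quarterly choice) are illustrative:

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Univariate Hodrick-Prescott filter: the trend tau solves the linear
    system (I + lam * D'D) tau = y, where D is the second-difference operator.
    The multivariate (HPMV) extension adds further observation equations, each
    weighted by a smoothing parameter of the kind studied in the paper."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    tau = np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
    return tau, y - tau   # trend and cycle (the "output gap" component)

# Toy quarterly series: smooth trend plus a cycle and noise.
t = np.arange(80)
y = 0.5 * t + np.sin(t / 6.0) + np.random.default_rng(4).normal(0, 0.3, 80)
trend, cycle = hp_filter(y)
```

The noise-to-signal ratio discussed in the paper plays the role of lambda: it trades off fidelity to the data against smoothness of the trend.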

  • 112.
    Dimoulkas, Ilias
    et al.
    KTH, School of Electrical Engineering (EES), Electric Power and Energy Systems.
    Amelin, Mikael
    KTH, School of Electrical Engineering (EES), Electric Power and Energy Systems.
    Hesamzadeh, Mohammad Reza
    KTH, School of Electrical Engineering (EES), Electric Power and Energy Systems.
    Forecasting Balancing Market Prices Using Hidden Markov Models, 2016, In: 2016 13th International Conference on the European Energy Market (EEM), IEEE conference proceedings, 2016, Conference paper (Refereed)
    Abstract [en]

    This paper presents a Hidden Markov Model (HMM) based method to predict the prices and trading volumes in the electricity balancing markets. HMMs are quite powerful for modelling stochastic processes whose underlying dynamics are not apparent. The proposed method provides both one-hour and 12-36-hour-ahead forecasts. The first is mostly useful to wind/solar producers for compensating their production imbalances, while the second is important when submitting offers to the day-ahead markets. The results are compared to those from a Markov-autoregressive model.
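A minimal sketch of HMM-based one-step-ahead forecasting, using the forward algorithm on a discretized observation alphabet; the two-regime matrices below are invented for illustration and are far simpler than the models in the paper:

```python
import numpy as np

def forward_predict(A, B, pi, obs):
    """Forward algorithm for a discrete HMM: filters the hidden state and
    returns the one-step-ahead predictive distribution over observations.
    A: state transition matrix, B: emission matrix, pi: initial distribution."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
    return (alpha @ A) @ B      # P(next observed symbol | history)

# Toy two-state model: two price regimes, each favouring different
# discretized price/volume symbols (0 = low, 1 = mid, 2 = high).
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])
pi = np.array([0.5, 0.5])
pred = forward_predict(A, B, pi, [0, 0, 0, 1])
```

Iterating the transition matrix before emitting extends this to the 12-36-hour-ahead horizon the paper needs for day-ahead offers.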

  • 113.
    Dizdarevic, Goran
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Data Fusion for Consumer Behaviour, 2017, Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits, Student thesis
    Abstract [en]

    This thesis analyses different methods of data fusion by fitting a chosen number of statistical models to empirical consumer data and evaluating their performance in terms of a selection of performance measures. The main purpose of the models is to predict business-related consumer variables. Conventional methods such as decision trees, linear models and K-nearest neighbors have been considered, as well as single-layered neural networks and the naive Bayesian classifier. Furthermore, ensemble methods for both classification and regression have been investigated by minimizing the cross-entropy and RMSE of predicted outcomes using the iterative non-linear BFGS optimization algorithm. Time consumption of the models and methods for feature selection are also discussed in this thesis. Data regarding consumer drinking habits, transaction and purchase history, and social demographic background is provided by Nepa. Evaluation of the performance measures indicates that the naive Bayesian classifier predicts consumer drinking habits most accurately, whereas the random forest, although the most time-consuming, is preferred when classifying the Consumer Satisfaction Index (CSI). Regression of the CSI yielded similar performance across all models. Moreover, the ensemble methods increased prediction accuracy slightly, at the cost of increased time consumption.
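The ensemble step described above (weight optimization by BFGS) can be sketched as follows; the softmax weight parameterization and the toy base-model predictions are assumptions for illustration, not details from the thesis:

```python
import numpy as np
from scipy.optimize import minimize

# Toy setup: predictions of three base regression models on 50 held-out points,
# with increasing noise levels so the models differ in quality.
rng = np.random.default_rng(5)
truth = rng.normal(0, 1, 50)
preds = np.stack([truth + rng.normal(0, s, 50) for s in (0.3, 0.6, 1.0)])

def rmse_of_weights(w):
    """RMSE of the weighted ensemble; weights are softmax-parameterized so
    they stay positive and sum to one during unconstrained BFGS."""
    w = np.exp(w) / np.exp(w).sum()
    return float(np.sqrt(((w @ preds - truth) ** 2).mean()))

res = minimize(rmse_of_weights, x0=np.zeros(3), method="BFGS")
weights = np.exp(res.x) / np.exp(res.x).sum()
```

For classification the same scheme applies with cross-entropy of the combined class probabilities in place of RMSE.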

  • 114.
    Djehiche, Boualem
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Actuarial mathematics for life contingent risks, 2011, In: Scandinavian Actuarial Journal, ISSN 0346-1238, E-ISSN 1651-2030, no 4, p. 318-318, Article, book review (Refereed)
  • 115.
    Djehiche, Boualem
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Nonlife actuarial models, theory, methods and evaluation, 2011, In: Scandinavian Actuarial Journal, ISSN 0346-1238, E-ISSN 1651-2030, no 4, p. 319-320, Article, book review (Refereed)
  • 116.
    Djehiche, Boualem
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Regression modeling with actuarial and financial applications, 2011, In: Scandinavian Actuarial Journal, ISSN 0346-1238, E-ISSN 1651-2030, no 4, p. 319-319, Article, book review (Refereed)
  • 117.
    Djehiche, Boualem
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Statistical estimation techniques in life and disability insurance—a short overview, 2016, In: Springer Proceedings in Mathematics and Statistics, Springer, 2016, p. 127-147, Conference paper (Refereed)
    Abstract [en]

    This is a short introduction to some basic aspects of statistical estimation techniques, known as graduation techniques, in life and disability insurance. © Springer International Publishing Switzerland 2016.

  • 118.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hamadene, Said
    Hdhiri, Ibtissam
    Stochastic Impulse Control of Non-Markovian Processes, 2010, In: Applied mathematics and optimization, ISSN 0095-4616, E-ISSN 1432-0606, Vol. 61, no 1, p. 1-26, Article in journal (Refereed)
    Abstract [en]

    We consider a class of stochastic impulse control problems of general stochastic processes, i.e. not necessarily Markovian. Under fairly general conditions we establish the existence of an optimal impulse control. We also prove the existence of combined optimal stochastic and impulse control for a fairly general class of diffusions with random coefficients. Unlike in the Markovian framework, we cannot apply quasi-variational inequality techniques; we instead derive the main results using techniques involving reflected BSDEs and the Snell envelope.

  • 119.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hamdi, Ali
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A full balance sheet two-mode optimal switching problem, 2015, In: Stochastics: An International Journal of Probability and Stochastic Processes, ISSN 1744-2508, E-ISSN 1744-2516, Vol. 87, no 4, p. 604-622, Article in journal (Refereed)
    Abstract [en]

    We formulate and solve a finite horizon full balance sheet two-mode optimal switching problem related to trade-off strategies between expected profit and cost yields. Given the current mode, this model allows for either a switch to the other mode or termination of the project, and this happens for both sides of the balance sheet. A novelty in this model is that the related obstacles are nonlinear in the underlying yields, whereas they are linear in the standard optimal switching problem. The optimal switching problem is formulated in terms of a system of Snell envelopes for the profit and cost yields, which act as obstacles to each other. We prove the existence of a continuous minimal solution of this system using an approximation scheme and fully characterize the optimal switching strategy.

  • 120.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hamdi, Ali
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A Full Balance Sheet Two-modes Optimal Switching problem, In: Mathematical Methods of Operations Research, ISSN 1432-2994, E-ISSN 1432-5217, Article in journal (Other academic)
    Abstract [en]

    We formulate and solve a finite horizon full balance sheet two-modes optimal switching problem related to trade-off strategies between expected profit and cost yields. The optimal switching problem is formulated in terms of a system of Snell envelopes for the profit and cost yields which act as obstacles to each other. We prove existence of a continuous minimal solution of this system using an approximation scheme and fully characterize the optimal switching strategy.

  • 121.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hamdi, Ali
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A Two-modes Mean-field Optimal Switching Problem for The Full Balance Sheet, 2014, In: International Journal of Stochastic Analysis, ISSN 2090-3332, E-ISSN 2090-3340, article id 159519, Article in journal (Refereed)
    Abstract [en]

    We consider the problem of switching a large number of production lines between two modes, high-production and low-production. The switching is based on the optimal expected profit and cost yields of the respective production lines, and considers both sides of the balance sheet. Furthermore, the production lines are all assumed to be interconnected through a coupling term, which is the average of all optimal expected yields. Intuitively, this means that each individual production line is compared to the average of all its peers which acts as a benchmark.

    Due to the complexity of the problem, we consider the aggregated optimal expected yields, where the coupling term is approximated with the mean of the optimal expected yields. This turns the problem into a two-modes optimal switching problem of mean-field type, which can be described by a system of Snell envelopes where the obstacles are interconnected and nonlinear.

    The main result of the paper is a proof of the existence of a continuous minimal solution to the system of Snell envelopes, as well as a full characterization of the optimal switching strategy.

  • 122.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hult, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Nyquist, Pierre
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Importance sampling for a Markovian intensity model with applications to credit risk, Manuscript (preprint) (Other academic)
    Abstract [en]

    This paper considers importance sampling for estimation of rare-event probabilities in a Markovian intensity model for credit risk. The main contribution is the design of efficient importance sampling algorithms using subsolutions of a certain Hamilton-Jacobi equation. For certain instances of the credit risk model the proposed algorithm is proved to be asymptotically optimal. The computational gain compared to standard Monte Carlo is illustrated by numerical experiments.
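A minimal example of the importance-sampling idea, using a Gaussian tail probability rather than the Markovian credit risk model of the paper; the exponential tilt below stands in for the subsolution-based change of measure:

```python
import numpy as np

def rare_event_prob_is(a, n=100_000, seed=6):
    """Estimate P(X > a) for X ~ N(0,1) by importance sampling: draw from
    N(a, 1) (an exponential tilt centred on the rare event) and reweight
    each sample by the likelihood ratio exp(-a*x + a^2/2). Standard Monte
    Carlo would need on the order of 1/P(X > a) samples to see the event."""
    rng = np.random.default_rng(seed)
    x = rng.normal(a, 1.0, n)
    lr = np.exp(-a * x + 0.5 * a * a)      # dP/dQ at each sample
    return float(np.mean((x > a) * lr))

est = rare_event_prob_is(4.0)              # true value is about 3.17e-5
```

In the paper, the role of choosing a good tilt is played by subsolutions of a Hamilton-Jacobi equation, which yield asymptotically optimal sampling schemes for the credit risk model.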

  • 123.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hult, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Nyquist, Pierre
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Min-max representations of viscosity solutions of Hamilton-Jacobi equations and applications in rare-event simulation, Manuscript (preprint) (Other academic)
    Abstract [en]

    In this paper a duality relation between the Mañé potential and Mather's action functional is derived in the context of convex and state-dependent Hamiltonians. The duality relation is used to obtain min-max representations of viscosity solutions of first order Hamilton-Jacobi equations. These min-max representations naturally suggest classes of subsolutions of Hamilton-Jacobi equations that arise in the theory of large deviations. The subsolutions, in turn, are good candidates for designing efficient rare-event simulation algorithms.

  • 124.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Löfdahl, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A hidden Markov approach to disability insurance, Manuscript (preprint) (Other academic)
    Abstract [en]

    Point and interval estimation of future disability inception and recovery rates are predominantly carried out by combining generalized linear models (GLM) with time series forecasting techniques into a two-step method involving parameter estimation from historical data and subsequent calibration of a time series model. This approach may in fact lead to both conceptual and numerical problems since any time trend components of the model are incoherently treated as both model parameters and realizations of a stochastic process. We suggest that this general two-step approach can be improved in the following way: First, we assume a stochastic process form for the time trend component. The corresponding transition densities are then incorporated into the likelihood, and the model parameters are estimated using the Expectation-Maximization algorithm. We illustrate the modelling procedure by fitting the model to Swedish disability claims data.

  • 125.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Löfdahl, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Aggregation of 1-year risks in life and disability insurance, 2016, In: Annals of Actuarial Science, ISSN 1748-4995, E-ISSN 1748-5002, Vol. 10, no 2, p. 203-221, Article in journal (Refereed)
    Abstract [en]

    We consider large insurance portfolios consisting of life or disability insurance policies that are assumed independent, conditional on a stochastic process representing the economic-demographic environment. Using the conditional law of large numbers, we show that when the portfolio of liabilities becomes large enough, its value on a delta-year horizon can be approximated by a functional of the environment process. Based on this representation, we derive a semi-analytical approximation of the systematic risk quantiles of the future liability value for a homogeneous portfolio when the environment is represented by a one-factor diffusion process. For the multi-factor diffusion case, we propose two different risk aggregation techniques for a portfolio consisting of large, homogeneous pools. We give numerical results comparing the resulting capital charges with the Solvency II standard formula, based on disability claims data from the Swedish insurance company Folksam.

  • 126.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Löfdahl, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Aggregation of one-year risks in life and disability insurance, Manuscript (preprint) (Other academic)
  • 127.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Löfdahl, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Nonlinear reserving in life insurance: aggregation and mean-field approximationManuscript (preprint) (Other academic)
  • 128.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Löfdahl, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Risk aggregation and stochastic claims reserving in disability insurance2014In: Insurance, Mathematics & Economics, ISSN 0167-6687, E-ISSN 1873-5959, Vol. 59, p. 100-108Article in journal (Refereed)
    Abstract [en]

    We consider a large, homogeneous portfolio of life or disability annuity policies. The policies are assumed to be independent conditional on an external stochastic process representing the economic-demographic environment. Using a conditional law of large numbers, we establish the connection between claims reserving and risk aggregation for large portfolios. Further, we derive a partial differential equation for moments of present values. Moreover, we show how statistical multi-factor intensity models can be approximated by one-factor models, which allows for solving the PDEs very efficiently. Finally, we give a numerical example where moments of present values of disability annuities are computed using finite-difference methods and Monte Carlo simulations.
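
A minimal one-factor instance of the PDE/Monte-Carlo comparison in the abstract is Thiele's ODE for the expected present value of a unit-rate life annuity with constant interest r and mortality intensity mu (all parameter values below are made up). The backward finite-difference solve and the simulation should agree with the closed form:

```python
import numpy as np

r, mu, T = 0.03, 0.01, 40.0      # interest, mortality intensity, horizon (hypothetical)
N = 4000
h = T / N

# Finite differences: solve V'(t) = (r + mu) V(t) - 1 backward from V(T) = 0
V = 0.0
for _ in range(N):
    V = V + h * (1.0 - (r + mu) * V)

# Monte Carlo: discounted annuity payments until death or horizon
rng = np.random.default_rng(2)
tau = rng.exponential(1 / mu, size=200_000)       # exponential lifetimes
pv = (1 - np.exp(-r * np.minimum(tau, T))) / r
mc = pv.mean()

# Closed-form benchmark
exact = (1 - np.exp(-(r + mu) * T)) / (r + mu)
```

The paper's setting replaces the constant intensity by a multi-factor stochastic one, which is precisely what makes the one-factor approximation and efficient PDE solves valuable.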

  • 129.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Rinne, Jonas
    Can stocks help mend the asset and liability mismatch?2010In: Scandinavian Actuarial Journal, ISSN 0346-1238, E-ISSN 1651-2030, no 2, p. 148-160Article in journal (Refereed)
    Abstract [en]

Stocks are generally used to provide higher returns in the long run. But the dramatic fall in equity prices at the beginning of this century, triggering large underfundings in pension plans, raised the question as to whether stocks can really help mend the asset and liability mismatch. To understand some aspects of this topical issue, we examine whether existing major equity indexes can close this gap, given the liability profile of a typical pension fund. We also compare the non-market capitalization weighted equity indexes recently introduced as Research Affiliates Fundamental Indexes® (RAFI®) with traditional market capitalization weighted equity indexes from an asset and liability management perspective. The analysis of the behavior of the solvency ratio clearly indicates that interest rate sensitive stocks have a large potential to improve the link between assets and liabilities. Compared with market capitalization weighted equity indexes, RAFI® shows a substantially better potential to mend the asset and liability mismatch, while also improving returns.

  • 130. Doll, Jim
    et al.
    Dupuis, Paul
    Nyquist, Pierre
    A large deviation analysis of certain qualitative properties of parallel tempering and infinite swapping algorithms2018In: Applied mathematics and optimization, ISSN 0095-4616, E-ISSN 1432-0606, Vol. 78, no 1, p. 103-144Article in journal (Refereed)
    Abstract [en]

    Parallel tempering, or replica exchange, is a popular method for simulating complex systems. The idea is to run parallel simulations at different temperatures, and at a given swap rate exchange configurations between the parallel simulations. From the perspective of large deviations it is optimal to let the swap rate tend to infinity and it is possible to construct a corresponding simulation scheme, known as infinite swapping. In this paper we propose a novel use of large deviations for empirical measures for a more detailed analysis of the infinite swapping limit in the setting of continuous time jump Markov processes. Using the large deviations rate function and associated stochastic control problems we consider a diagnostic based on temperature assignments, which can be easily computed during a simulation. We show that the convergence of this diagnostic to its a priori known limit is a necessary condition for the convergence of infinite swapping. The rate function is also used to investigate the impact of asymmetries in the underlying potential landscape, and where in the state space poor sampling is most likely to occur.

  • 131. Doll, Jim
    et al.
    Dupuis, Paul
    Nyquist, Pierre
    Thermodynamic integration methods, infinite swapping and the calculation of generalized averages2017In: Journal of Chemical Physics, ISSN 0021-9606, E-ISSN 1089-7690, Vol. 146Article in journal (Refereed)
    Abstract [en]

    In the present paper we examine the risk-sensitive and sampling issues associated with the problem of calculating generalized averages. By combining thermodynamic integration and Stationary Phase Monte Carlo techniques, we develop an approach for such problems and explore its utility for a prototypical class of applications.

  • 132. Doorn, N.
    et al.
    Hansson, Sven Ove
    KTH, School of Architecture and the Built Environment (ABE), Philosophy and History of Technology, Philosophy.
    Design for the value of safety2015In: Handbook of Ethics, Values, and Technological Design: Sources, Theory, Values and Application Domains, Springer Netherlands, 2015, p. 491-511Chapter in book (Other academic)
    Abstract [en]

    Two major methods for achieving safety in engineering design are compared: safety engineering and probabilistic risk analysis. Safety engineering employs simple design principles or rules of thumb such as inherent safety, multiple barriers, and numerical safety margins to reduce the risk of accidents. Probabilistic risk analysis combines the probabilities of individual events in event chains leading to accidents in order to identify design elements in need of improvement and often also to optimize the use of resources. It is proposed that the two methodologies should be seen as complementary rather than as competitors. Probabilistic risk analysis is at its advantage when meaningful probability estimates are available for most of the major events that may contribute to an accident. Safety engineering principles are more suitable to deal with uncertainties that defy quantification. In many design tasks, the combined use of both methodologies is preferable.

  • 133. Douc, Randal
    et al.
    Moulines, Eric
    Olsson, Jimmy
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Long-term stability of sequential monte carlo methods under verifiable conditions2014In: The Annals of Applied Probability, ISSN 1050-5164, E-ISSN 2168-8737, Vol. 24, no 5, p. 1767-1802Article in journal (Refereed)
    Abstract [en]

This paper discusses particle filtering in general hidden Markov models (HMMs) and presents novel theoretical results on the long-term stability of bootstrap-type particle filters. More specifically, we establish that the asymptotic variance of the Monte Carlo estimates produced by the bootstrap filter is uniformly bounded in time. Contrary to most previous results of this type, which in general presuppose that the state space of the hidden state process is compact (an assumption that is rarely satisfied in practice), our very mild assumptions are satisfied for a large class of HMMs with possibly non-compact state space. In addition, we derive a similar time-uniform bound on the asymptotic L-p error. Importantly, our results hold for misspecified models; that is, we do not at all assume that the data entering into the particle filter originate from the model governing the dynamics of the particles, or even from an HMM.
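
The bootstrap filter analyzed above can be sketched in a few lines. On a linear-Gaussian toy model (parameters made up here) its filter means can be checked against the exact Kalman filter:

```python
import numpy as np

rng = np.random.default_rng(3)
# Linear-Gaussian state space, so the exact answer is known from the Kalman filter:
# X_t = a X_{t-1} + N(0, q),  Y_t = X_t + N(0, r2)
a, q, r2, T = 0.9, 0.5, 1.0, 100
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t-1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r2), T)

# Bootstrap particle filter: propagate with the dynamics, weight by the likelihood, resample
N = 5000
part = np.zeros(N)
pf_mean = np.zeros(T)
for t in range(T):
    part = a * part + rng.normal(0, np.sqrt(q), N)
    logw = -0.5 * (y[t] - part) ** 2 / r2
    w = np.exp(logw - logw.max()); w /= w.sum()
    pf_mean[t] = w @ part
    part = part[rng.choice(N, N, p=w)]          # multinomial resampling

# Kalman filter reference
m, P = 0.0, 0.0
kf_mean = np.zeros(T)
for t in range(T):
    m, P = a * m, a * a * P + q                 # predict
    K = P / (P + r2)
    m, P = m + K * (y[t] - m), (1 - K) * P      # update
    kf_mean[t] = m
```

The paper's stability results say, informally, that the Monte Carlo error of such estimates does not accumulate over time.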

  • 134. Douc, Randal
    et al.
    Moulines, Eric
    Rydén, Tobias
    Lund University.
    Asymptotic properties of the maximum likelihood estimator in autoregressive models with Markov regime2004In: Annals of Statistics, ISSN 0090-5364, E-ISSN 2168-8966, Vol. 32, no 5, p. 2254-2304Article in journal (Refereed)
    Abstract [en]

    An autoregressive process with Markov regime is an autoregressive process for which the regression function at each time point is given by a nonobservable Markov chain. In this paper we consider the asymptotic properties of the maximum likelihood estimator in a possibly nonstationary process of this kind for which the hidden state space is compact but not necessarily finite. Consistency and asymptotic normality are shown to follow from uniform exponential forgetting of the initial distribution for the hidden Markov chain conditional on the observations.

  • 135.
    Drugge, Daniel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Allocation Methods for Alternative Risk Premia Strategies2014Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    We use regime switching and regression tree methods to evaluate performance in the risk premia strategies provided by Deutsche Bank and constructed from U.S. research data from the Fama French library. The regime switching method uses the Baum-Welch algorithm at its core and splits return data into a normal and a turbulent regime. Each regime is independently evaluated for risk and the estimates are then weighted together according to the expected value of the proceeding regime. The regression tree methods identify macro-economic states in which the risk premia perform well or poorly and use these results to allocate between risk premia strategies. The regime switching method proves to be mostly unimpressive but has its results boosted by investing less into risky assets as the probability of an upcoming turbulent regime becomes larger. This proves to be highly effective for all time periods and for both data sources. The regression tree method proves the most effective when making the assumption that we know all macro-economic data the same month as it is valid for. Since this is an unrealistic assumption the best method seems to be to evaluate the performance of the risk premia strategy using macro-economic data from the previous quarter.
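
The core of the Baum-Welch machinery used above is the forward-backward pass that splits returns into a calm and a turbulent regime. The sketch below runs that E-step with the true parameters for brevity (a full Baum-Welch would re-estimate them in an M-step); the transition matrix and volatilities are made up:

```python
import numpy as np

rng = np.random.default_rng(4)
# Simulated returns: calm regime N(0, 0.01^2) and turbulent regime N(0, 0.03^2)
A = np.array([[0.95, 0.05], [0.10, 0.90]])       # hypothetical transition matrix
sig = np.array([0.01, 0.03])
states = np.zeros(500, dtype=int)
for t in range(1, 500):
    states[t] = rng.choice(2, p=A[states[t-1]])
ret = rng.normal(0, sig[states])

def dens(x, s):
    return np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2 * np.pi))

B = np.column_stack([dens(ret, sig[0]), dens(ret, sig[1])])  # emission likelihoods
T = len(ret)

# Forward pass (scaled)
alpha = np.zeros((T, 2))
alpha[0] = np.array([0.5, 0.5]) * B[0]; alpha[0] /= alpha[0].sum()
for t in range(1, T):
    alpha[t] = (alpha[t-1] @ A) * B[t]; alpha[t] /= alpha[t].sum()

# Backward pass and smoothed regime probabilities
beta = np.ones((T, 2))
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[t+1] * beta[t+1]); beta[t] /= beta[t].sum()
gamma = alpha * beta
gamma /= gamma.sum(axis=1, keepdims=True)

# How well do the smoothed probabilities recover the simulated regimes?
acc = ((gamma[:, 1] > 0.5) == (states == 1)).mean()
```

In the thesis, risk estimates from the two regimes are then weighted together using such smoothed regime probabilities.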

  • 136.
    Dufour Partanen, Bianca
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    On the Valuation of Contingent Convertibles (CoCos): Analytically Tractable First Passage Time Model for Pricing AT1 CoCos2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

Contingent Convertibles (CoCos) are a new type of hybrid debt instrument characterized by forced equity conversion or write-down under a specified trigger event, usually indicating a state of near non-viability of the issuer. CoCos of the Additional Tier 1 capital category have additional features such as possible coupon cancellation. In this thesis, the structure of CoCos is presented and different pricing approaches are introduced. A special focus is put on structural models, with the Analytically Tractable First Passage Time (AT1P) model and its extensions. Two models are applied to the write-down CoCo issued by Svenska Handelsbanken, starting with the equity derivative model and followed by the AT1P model.

  • 137.
    Edberg, Erik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Prediktering av VD-löner i svenska onoterade aktiebolag2015Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

The CEO’s remuneration is, in contrast to that of unionized employees, set individually and independently of union agreements. The company board determines the remuneration. It is based on an estimated valuation of variables such as job characteristics, personal qualities of the CEO, market valuation of similar tasks, and the availability of possible candidates.

The purpose of this thesis is to create a model to predict the market remuneration for a current or forthcoming CEO. Further, the compensation structure is examined, aiming to find the compensation structure that maximizes the CEO’s performance.

This thesis shows that it is possible to predict the remuneration of CEOs employed in unlisted corporations with a 64 percent explanation rate. The variance is explained by six covariates: four representing job characteristics and two related to company performance. The highest explanation rate is given by the covariate turnover, which alone explains just below 40 percent of the remuneration variance.

This study shows that the optimal compensation structure differs between companies. Further, recommendations are given for what the variable remuneration should be based on in order to maximize the CEO’s performance.

  • 138.
    Ekeberg, Magnus
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Detecting contacts in protein folds by solving the inverse Potts problem - a pseudolikelihood approach2012Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Spatially proximate amino acid positions in a protein tend to co-evolve, so a protein's 3D-structure leaves an echo of correlations in the evolutionary record. Reverse engineering 3D-structures from such correlations is an open problem in structural biology, pursued with increasing vigor as new protein sequences continue to fill the data banks. Within this task lies a statistical stumbling block, rooted in the following: correlation between two amino acid positions can arise from firsthand interaction, but also be network-propagated via intermediate positions; observed correlation is not enough to guarantee proximity. The remedy, and the focus of this thesis, is to mathematically untangle the crisscross of correlations and extract direct interactions, which enables a clean depiction of co-evolution among the positions.

    Recently, analysts have used maximum-entropy modeling to recast this cause-and-effect puzzle as parameter learning in a Potts model (a kind of Markov random field). Unfortunately, a computationally expensive partition function puts this out of reach of straightforward maximum-likelihood estimation. Mean-field approximations have been used, but an arsenal of other approximate schemes exists. In this work, we re-implement an existing contact-detection procedure and replace its mean-field calculations with pseudo-likelihood maximization. We then feed both routines real protein data and highlight differences between their respective outputs. Our new program seems to offer a systematic boost in detection accuracy.

  • 139.
    El Menouni, Zakaria
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pricing Interest Rate Derivatives in the Multi-Curve Framework with a Stochastic Basis2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

The financial crisis of 2007/2008 brought about many changes in the interest rate market in particular, as it forced practitioners to review and modify the former pricing procedures and methodologies. As a consequence, the Multi-Curve framework has been adopted to deal with the inconsistencies of the frameworks used until then, namely the single-curve method.

We propose to study this new framework in detail by focusing on a set of interest rate derivatives such as deposits, swaps and caplets. We then explore a stochastic approach to model the Libor-OIS basis spread, which has appeared since the beginning of the crisis and is now a quantity of interest to which many researchers dedicate their work (F. Mercurio, M. Bianchetti and others).

A discussion follows this study to shed light on the challenges and difficulties related to the modelling of the basis spread.

  • 140.
    Eliasson, Daniel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Game contingent claims2012Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

Game contingent claims (GCCs), as introduced by Kifer (2000), are a generalization of American contingent claims where the writer has the opportunity to terminate the contract, and must then pay the intrinsic option value plus a penalty. In complete markets, GCCs are priced using no-arbitrage arguments as the value of a zero-sum stochastic game of the type described in Dynkin (1969). In incomplete markets, the neutral pricing approach of Kallsen and Kühn (2004) can be used.

    In Part I of this thesis, we introduce GCCs and their pricing, and also cover some basics of mathematical finance. In Part II, we present a new algorithm for valuing game contingent claims. This algorithm generalises the least-squares Monte-Carlo method for pricing American options of Longstaff and Schwartz (2001). Convergence proofs are obtained, and the algorithm is tested against certain GCCs. A more efficient algorithm is derived from the first one using the computational complexity analysis technique of Chen and Shen (2003).

The algorithms were found to give good results with reasonable time requirements. Reference implementations of both algorithms are available for download from the author’s GitHub page https://github.com/del/Game-option-valuation-library
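
The least-squares Monte Carlo method of Longstaff and Schwartz that the thesis generalises can be sketched on its classic special case, an American put (a GCC whose writer never cancels); all market parameters below are made up:

```python
import numpy as np

rng = np.random.default_rng(6)
S0, K, r, sigma, T, steps, n = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 100_000
dt = T / steps
disc = np.exp(-r * dt)

# Simulate GBM paths under the risk-neutral measure
z = rng.normal(size=(n, steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))

# Backward induction: regress discounted future cash flows on the stock price
cash = np.maximum(K - S[:, -1], 0.0)
for t in range(steps - 2, -1, -1):
    cash *= disc
    itm = K - S[:, t] > 0                       # regress on in-the-money paths only
    coef = np.polyfit(S[itm, t], cash[itm], 2)  # quadratic basis
    cont = np.polyval(coef, S[itm, t])          # estimated continuation value
    exer = K - S[itm, t]
    cash[itm] = np.where(exer > cont, exer, cash[itm])
price = disc * cash.mean()
```

A game-option version additionally lets the writer cancel, so the backward step compares exercise, cancellation-plus-penalty and continuation values at each date.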

  • 141.
    Engman, Kristofer
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Bidding models for bond market auctions2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In this study, we explore models for optimal bidding in auctions on the bond market using data gathered from the Bloomberg Fixed Income Trading platform and MIFID II reporting. We define models that aim to fulfill two purposes. The first is to hit the best competitor price, such that a dealer can win the trade with the lowest possible margin. This model should also take into account the phenomenon of the Winner's Curse, which states that the winner of a common value auction tends to be the bidder who overestimated the value. We want to avoid this since setting a too aggressive bid could be unprofitable even when the dealer wins. The second aim is to define a model that estimates a quote that allows the dealer to win a certain target ratio of trades. We define three novel models for these purposes that are based on the best competitor prices for each trade, modeled by a Skew Exponential Power distribution. Further, we define a proxy for the Winner's Curse, represented by the distance of the estimated price from a reference price for the trade calculated by Bloomberg which is available when the request for quote (RFQ) arrives. Relevant covariates for the trades are also included in the models to increase the specificity for each trade. The novel models are compared to a linear regression and a random forest regression method using the same covariates.

    When trying to hit the best competitor price, the regression models have approximately equal performance to the expected price method defined in the study. However, when incorporating the Winner's Curse proxy, our Winner's Curse adjusted models are able to reduce the effect of the Winner's Curse as we define it, which the regression methods cannot. The results of the models for hitting a target ratio show that the actual hit ratio falls within an interval of 5% of the desired target ratio when running the model on the test data. The inclusion of covariates in the models does not impact the results as much as expected, but still provide improvements with respect to some measures. In summary, the novel methods show promise as a first step towards building algorithmic trading for bonds, but more research is needed and should incorporate more of the growing data set of RFQs and MIFID II recorded transaction prices.
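
The Winner's Curse effect that the adjusted models target can be demonstrated with a tiny common-value simulation (all distributions and parameters invented for the sketch): even when every dealer's estimate of the bond's value is unbiased, the winning estimate is systematically too high:

```python
import numpy as np

rng = np.random.default_rng(7)
n_auctions, n_dealers = 100_000, 5
V = rng.normal(100.0, 1.0, size=n_auctions)                  # common value
est = V[:, None] + rng.normal(0, 0.5, size=(n_auctions, n_dealers))

# Each dealer naively bids its own unbiased estimate; the highest bid wins
winning = est.max(axis=1)
avg_error = (winning - V).mean()      # > 0: the winner systematically overestimates
```

This is why setting a bid purely to hit the best competitor price can be unprofitable even when the dealer wins, and why the models shade their quotes.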

  • 142.
    Engsner, Hampus
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A PIT - Based approach to Validation of Electricity Spot Price Models2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

The modeling of electricity spot prices is still in its early stages, with various competing models being proposed by different researchers. This makes model evaluation and comparison an important area of research, for practitioners and researchers alike. However, there is a distinct lack of consensus in the literature regarding model evaluation tools for assessing model validity, with different researchers using different methods of varying suitability as validation methods. In this thesis the current landscape of electricity spot price models, and how they are currently evaluated, is mapped out. Then, as the main contribution of this research, a general and flexible framework for model validation is proposed, based on the Probability Integral Transform (PIT). The probability integral transform, which can be seen as a generalization of analyzing residuals in simple time series and regression models, transforms the realizations of a time series into independent and identically distributed U(0,1) variables using the conditional distributions of the time series. Testing model validity is with this method reduced to testing whether the PIT values are independent and identically distributed U(0,1) variables. The thesis concludes by testing spot price models of varying validity (according to previous research) against actual spot price data using this framework. These empirical tests suggest that PIT-based model testing does indeed point us toward the more suitable models, with especially unsuitable models being rejected by a large margin.
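
The PIT mechanism described above can be sketched on a toy AR(1) "price" series (parameters made up): plugging each realization into the model's conditional forecast CDF yields uniform values for a valid model and clearly non-uniform values for a misspecified one:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
# Toy series: an AR(1) process standing in for spot prices
a, s = 0.8, 1.0
x = np.zeros(1000)
for t in range(1, 1000):
    x[t] = a * x[t-1] + rng.normal(0, s)

# PIT: evaluate each realization under the model's conditional forecast CDF
pit_good = stats.norm.cdf(x[1:], loc=a * x[:-1], scale=s)   # correct model
pit_bad = stats.norm.cdf(x[1:], loc=0.0, scale=s)           # ignores the dynamics

# Under a valid model the PIT values are i.i.d. U(0,1); test uniformity
p_good = stats.kstest(pit_good, "uniform").pvalue
p_bad = stats.kstest(pit_bad, "uniform").pvalue
```

The misspecified model's PIT values pile up in the tails because it understates the marginal variance, so the uniformity test rejects it by a wide margin.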

  • 143.
    Engström, Alva
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Frithz, Filippa
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Measuring the impact of strategic and tactic allocation for managed futures portfolios2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

The optimal asset allocation is an ever-present question for investment managers. This thesis aims to investigate the impact of risk parity and target volatility on the Sharpe ratio of a portfolio consisting of futures contracts on equity indices and bonds during the period 2000-2018. In addition, this thesis examines on which level - instrument, asset class or total portfolio level - a momentum strategy has the largest effect. This is done by applying design of experiments. The final result of this thesis is that risk parity and target volatility improve the Sharpe ratio compared to a classic 60/40 capital allocation. Furthermore, utilising momentum strategies is most beneficial on the asset class level, i.e. allocating between equity index and bond futures.

  • 144.
    Eriksson, André
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
Anomaly Detection in Machine-Generated Data: A Structured Approach2013Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Anomaly detection is an important issue in data mining and analysis, with applications in almost every area in science, technology and business that involves data collection. The development of general anomaly detection techniques can therefore have a large impact on data analysis across many domains. In spite of this, little work has been done to consolidate the different approaches to the subject.

    In this report, this deficiency is addressed in the target domain of temporal machine-generated data. To this end, new theory for comparing and reasoning about anomaly detection tasks and methods is introduced, which facilitates a problem-oriented rather than a method-oriented approach to the subject. Using this theory as a basis, the possible approaches to anomaly detection in the target domain are discussed, and a set of interesting anomaly detection tasks is highlighted.

One of these tasks is selected for further study: the detection of subsequences that are anomalous with regard to their context within long univariate real-valued sequences. A framework for relating methods derived from this task is developed, and is used to derive new methods and an algorithm for solving a large class of derived problems. Finally, a software implementation of this framework, along with a set of evaluation utilities, is discussed and demonstrated.

  • 145. Eriksson, Kimmo
    et al.
    Jansson, Fredrik
    Sjöstrand, Jonas
    Stockholms universitet.
    Bentley's conjecture on popularity toplist turnover under random copying2010In: The Ramanujan journal, ISSN 1382-4090, E-ISSN 1572-9303, Vol. 23, p. 371-396Article in journal (Refereed)
    Abstract [en]

Bentley et al. studied the turnover rate in popularity toplists in a ’random copying’ model of cultural evolution. Based on simulations of a model with population size N, list length ℓ and invention rate μ, they conjectured a remarkably simple formula for the turnover rate: ℓ√μ. Here we study an overlapping generations version of the random copying model, which can be interpreted as a random walk on the integer partitions of the population size. In this model we show that the conjectured formula, after a slight correction, holds asymptotically.
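
The random copying model behind the conjecture is easy to simulate (the Wright-Fisher-style variant and all parameter values below are choices made for this sketch, not the paper's exact overlapping-generations model): each generation everyone copies a random member of the previous generation, inventing a fresh variant with probability μ, and turnover counts new entries in the top-ℓ list:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(9)

def avg_turnover(N, ell, mu, gens=600):
    pop = np.zeros(N, dtype=int)      # everyone starts with variant 0
    next_id = 1
    prev_top, turns, counted = set(), 0, 0
    for g in range(gens):
        parents = pop[rng.integers(0, N, size=N)]      # random copying
        invent = rng.random(N) < mu                    # innovation
        n_new = int(invent.sum())
        parents[invent] = np.arange(next_id, next_id + n_new)
        next_id += n_new
        pop = parents
        top = {v for v, _ in Counter(pop).most_common(ell)}
        if g > gens // 2:                              # discard burn-in
            turns += len(top - prev_top)
            counted += 1
        prev_top = top
    return turns / counted

z_low = avg_turnover(N=500, ell=10, mu=0.01)
z_high = avg_turnover(N=500, ell=10, mu=0.04)
```

Per the conjectured ℓ√μ scaling, quadrupling μ should roughly double the turnover, which the two simulated rates can be eyeballed against.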

  • 146.
    Eriksson, Kimmo
    et al.
    Mälardalen University, School of Education, Culture and Communication.
    Sjöstrand, Jonas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Limiting shapes of birth-and-death processes on Young diagrams2012In: Advances in Applied Mathematics, ISSN 0196-8858, E-ISSN 1090-2074, Vol. 48, no 4, p. 575-602Article in journal (Refereed)
    Abstract [en]

    We consider a family of birth processes and birth-and-death processes on Young diagrams of integer partitions of n. This family incorporates three famous models from very different fields: Rost's totally asymmetric particle model (in discrete time), Simon's urban growth model, and Moran's infinite alleles model. We study stationary distributions and limit shapes as n tends to infinity, and present a number of results and conjectures.

  • 147.
    Ernstsson, Hampus
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Börjes Liljesvan, Max
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Multiples for Valuation Estimates of Life Science Companies in Sweden2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

Market multiples are a common and simple tool for estimating corporate value. They can express temporal dynamics and differences between markets, industries and firms. Despite their practical usefulness, some critical problems remain and continue to be debated. This thesis investigates whether there exist characteristics that explain market capitalization by market multiples within the life science industry in Sweden. The approach follows well-known theory of multiple linear regression analysis. The results indicated a linear relationship only between the market cap and the R&D expenditures of a company. This does not mean that the other explanatory variables have no effect on market cap, only that no linear relationship could be statistically proven.
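
The kind of significance test behind the conclusion above can be sketched with ordinary least squares and per-coefficient t-tests on simulated firm data (all variables and effect sizes below are invented; in the simulation only R&D truly drives market cap):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
n = 60
rnd = rng.uniform(1, 50, n)          # R&D expenditure (true driver here)
emp = rng.uniform(10, 500, n)        # employees (no real effect here)
mcap = 5.0 + 3.0 * rnd + rng.normal(0, 10, n)

# OLS fit with intercept
X = np.column_stack([np.ones(n), rnd, emp])
beta, _, _, _ = np.linalg.lstsq(X, mcap, rcond=None)

# Standard errors, t-statistics and two-sided p-values per coefficient
resid = mcap - X @ beta
s2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
tval = beta / se
pval = 2 * stats.t.sf(np.abs(tval), df=n - X.shape[1])
```

A covariate whose linear effect cannot be distinguished from noise gets a large p-value, which mirrors the thesis's finding that only R&D expenditure showed a statistically provable linear relationship.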

  • 148.
    Ewertzh, Jacob
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Bankruptcy Distributions and Modelling for Swedish Companies Using Logistic Regression2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

This thesis discusses the concept of bankruptcy, or default, for Swedish companies. The actual distribution over time is considered both on the aggregate level and within different industries. Several models are constructed to describe the default frequency as well as possible. Mainly logistic regression models are designed for this purpose, but various other models are considered, some constructed for comparison and in the ambition to produce the most accurate model possible. A large data set of nearly 30 million quarterly observations is used in the analysis, taking into account both micro- and macroeconomic data. The derived models cover different time periods, consider different variables and display varying levels of accuracy. The most accurate model is a logistic regression model considering both micro and macro data. It is tested both in sample and out of sample and performs very well in both settings. This model is first estimated on a subset of the data set to enable comparison with a real scenario. Then an equivalent model is constructed from the whole data set to describe future scenarios as well as possible. Here Vector Auto-Regressive (VAR) models, and empirical models constructed by OLS regression estimating the firm values, are used in combination with the logistic regression model to predict the future. All three models are used to describe the most likely scenarios, as well as the worst-case scenarios. From the worst-case scenarios risk measures, such as the empirical value at risk, can be derived. From all this analysis the most significant result is that the logistic regression model performs remarkably well both in-sample and out-of-sample if macro variables are taken into account. The future results are harder to interpret; yet the analysis provides arguments for prediction accuracy and the interesting result of a continued low default frequency within the next year.

  • 149. Ezquiaga, J. M.
    et al.
    Zumalacárregui, Miguel
    KTH, Centres, Nordic Institute for Theoretical Physics NORDITA.
    Dark Energy after GW170817: Dead Ends and the Road Ahead2017In: Physical Review Letters, ISSN 0031-9007, E-ISSN 1079-7114, Vol. 119, no 25, article id 251304Article in journal (Refereed)
    Abstract [en]

Multimessenger gravitational-wave (GW) astronomy has commenced with the detection of the binary neutron star merger GW170817 and its associated electromagnetic counterparts. The almost coincident observation of both signals places an exquisite bound on the GW speed |c_g/c − 1| ≤ 5×10⁻¹⁶. We use this result to probe the nature of dark energy (DE), showing that a large class of scalar-tensor theories and DE models are highly disfavored. As an example we consider the covariant Galileon, a cosmologically viable, well motivated gravity theory which predicts a variable GW speed at low redshift. Our results eliminate any late-universe application of these models, as well as their Horndeski and most of their beyond Horndeski generalizations. Three alternatives (and their combinations) emerge as the only possible scalar-tensor DE models: (1) restricting Horndeski's action to its simplest terms, (2) applying a conformal transformation which preserves the causal structure, and (3) compensating the different terms that modify the GW speed (to be robust, the compensation has to be independent of the background on which GWs propagate). Our conclusions extend to any other gravity theory predicting varying c_g such as Einstein-Aether, Hořava gravity, Generalized Proca, tensor-vector-scalar gravity (TeVeS), and other MOND-like gravities.

  • 150.
    Foa', Alessandro
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Object Detection in Object Tracking System for Mobile Robot Application2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

This thesis work takes place at the Emerging Technologies department of Volvo Construction Equipment (CE), in the context of a larger project which involves several students. The focus is a mobile robot built by Volvo for testing AI features such as Decision Making, Natural Language Processing, Speech Recognition and Object Detection. This thesis focuses on the latter. During the last five years researchers have built very powerful deep learning object detectors in terms of accuracy and speed. This has been possible thanks to the remarkable development of Convolutional Neural Networks as feature extractors for Image Classification. The purpose of the report is to give a broad view of the state-of-the-art literature on Object Detection, in order to choose the best detector for the robot application Volvo CE is working on, considering that the robot's real-time performance is a priority goal of the project. After comparing the different methods, YOLOv3 seems to be the best choice. This framework is implemented in Python and integrated with an object tracking system which returns the 3D position of the objects of interest. The resulting system is evaluated in terms of speed and precision of the detection of the objects.
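
A core post-processing step shared by YOLO-style detectors (not the thesis's own implementation) is non-maximum suppression over intersection-over-union: overlapping boxes for the same object are pruned, keeping only the highest-scoring one. A minimal sketch with made-up boxes:

```python
import numpy as np

def iou(box, boxes):
    # Boxes given as [x1, y1, x2, y2]; intersection-over-union of box vs each row
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    a = (box[2] - box[0]) * (box[3] - box[1])
    b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (a + b - inter)

def nms(boxes, scores, thr=0.5):
    # Greedily keep the best-scoring box, drop boxes overlapping it too much
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= thr]
    return keep

# Three raw detections of the same object plus one distinct object
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [0, 0, 9, 10], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7, 0.95])
```

Calling `nms(boxes, scores)` keeps one box per object, in descending score order.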
