  • Yao, Yuan
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Power and Performance Optimization for Network-on-Chip based Many-Core Processors (2019). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Network-on-Chip (NoC) is emerging as a critical shared architecture for CMPs (Chip Multi-/Many-Core Processors) running parallel and concurrent applications. As the core count scales up and the transistor size shrinks, how to optimize power and performance for the NoC opens new research challenges.

    As it can potentially consume 20–40% of the entire chip power, the NoC's power efficiency has emerged as one of the main design constraints in today's and future high-performance CMPs. For NoC power management, we propose a novel on-chip DVFS technique that adjusts each region's NoC V/F level according to the levels voted for by communicating threads. A thread periodically votes for a preferred NoC V/F level that best suits its individual performance interests. The final DVFS decision of each region is made democratically by a region DVFS controller, based on the majority of the votes it receives.
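    The majority-vote rule described above can be sketched in a few lines. The function `region_vf_level`, the tie-breaking rule and the integer encoding of V/F levels are illustrative assumptions for this sketch, not the controller actually proposed in the thesis:

```python
from collections import Counter

def region_vf_level(votes):
    """Pick the V/F level for one NoC region as the plurality of the
    levels voted for by its communicating threads.  Ties are broken in
    favour of the higher level (an assumption made here to protect
    performance; the thesis's controller may differ)."""
    if not votes:
        return None
    tally = Counter(votes)
    # max over (vote count, level): most votes wins, higher level on ties
    level, _count = max(tally.items(), key=lambda kv: (kv[1], kv[0]))
    return level

# Three threads prefer level 2, one prefers level 4: the region runs at 2.
print(region_vf_level([2, 2, 4, 2]))
```

Breaking ties toward the higher level is one plausible policy; breaking them toward the lower level would instead favour power savings.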

    Mutually exclusive locks are pervasive shared-memory synchronization primitives. In advanced locks such as the Linux queue spinlock, which comprise a low-overhead spinning phase and a high-overhead sleeping phase, we show that the lock primitive may create very high competition overhead (COH), the time threads spend competing with each other for the next critical-section grant. For performance enhancement, we propose a software-hardware cooperative mechanism that opportunistically maximizes the chance of a thread winning the critical section in the low-overhead spinning phase and minimizes the chance of winning it in the high-overhead sleeping phase, so that COH is significantly reduced. We further observe that the cache invalidation-acknowledgement round-trip delay between the home node storing the critical-section lock and the cores contending for it can heavily degrade application performance. To reduce this lock coherence overhead (LCO), we propose in-network packet generation (iNPG), which turns passive "normal" NoC routers into active "big" ones that can not only transmit but also generate packets to perform early invalidation and collect inv-acks. iNPG effectively shortens the protocol round-trip delay and thus largely reduces LCO in various locking primitives.

    To enhance performance fairness when running multiple multi-threaded programs on a single CMP, we develop the concept of an aggregate flow, which refers to a sequence of associated data and cache coherence flows issued from the same thread. Based on the aggregate-flow concept, we propose three coherent mechanisms to efficiently achieve performance isolation: rate profiling, rate inheritance and flow arbitration. Rate profiling dynamically characterizes thread performance and communication needs. Rate inheritance allows a data or coherence reply flow to inherit the characteristics of its associated data or coherence request flow, so that a consistent bandwidth allocation policy is applied to all sub-flows of the same aggregate flow. Flow arbitration uses a proven scheduling policy, self-clocked fair queueing (SCFQ), to achieve rate-proportional arbitration for different aggregate flows. Our approach achieves balanced performance isolation across different mixtures of applications.
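    As a rough sketch of how SCFQ's rate-proportional arbitration works, the following simplified software model assumes all packets are backlogged at time zero; it illustrates the classic SCFQ tagging rule, not the thesis's hardware arbiter:

```python
import heapq

def scfq_schedule(flows, rates):
    """Self-clocked fair queueing (SCFQ) over backlogged flows.

    Each packet of flow i receives a finish tag
        F_i = max(v, F_i_prev) + length / rates[i],
    where the virtual time v is the finish tag of the packet in
    service (hence "self-clocked").  Serving packets in increasing
    finish-tag order gives each flow throughput proportional to its
    rate.  With every packet backlogged at t = 0, v is still 0 while
    tags are assigned, which keeps this sketch short; a full
    implementation would assign tags at packet arrival time.
    """
    finish = {i: 0.0 for i in rates}   # last finish tag per flow
    v = 0.0
    heap = []
    for i, packets in flows.items():
        for seq, length in enumerate(packets):
            finish[i] = max(v, finish[i]) + length / rates[i]
            heapq.heappush(heap, (finish[i], i, seq))
    order = []
    while heap:
        tag, i, seq = heapq.heappop(heap)
        v = tag                        # virtual time follows service
        order.append((i, seq))
    return order

# Flow 'a' has twice the rate of flow 'b', with equal packet lengths.
schedule = scfq_schedule({'a': [1.0] * 4, 'b': [1.0] * 2},
                         {'a': 2.0, 'b': 1.0})
```

Served in tag order, the first three slots go to two `'a'` packets and one `'b'` packet, matching the 2:1 rate ratio.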

  • Public defence: 2019-08-23 10:00 F3, Stockholm
    Töpfer, Fritzi
    KTH, School of Electrical Engineering and Computer Science (EECS), Micro and Nanosystems.
    Micromachined Microwave Sensors for Non-Invasive Skin Cancer Diagnostics (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Malignant melanoma is one of the cancers with the highest incidence rates. It is also the most dangerous skin cancer type, and an early diagnosis is crucial for the successful treatment of malignant melanoma patients. If it is diagnosed and treated at an early stage, the survival rate for patients is 99%; however, this is reduced to only 25% if diagnosed at a later stage. The work in this thesis combines microsystem technology, microwave engineering and biomedical engineering to develop a sensing tool for early-stage malignant melanoma diagnostics. Such a tool could not only increase the clinical accuracy of malignant melanoma diagnosis, but also reduce the time needed for examination and lower the number of unnecessary biopsies. Furthermore, a reliable and easy-to-use tool could enable non-specialist healthcare personnel, including primary care physicians and nurses, to perform a prescreening for malignant melanoma with high sensitivity. Consequently, a large number of patients could receive a timely examination despite the shortage of dermatologists that exists in many healthcare systems. The dielectric properties of tumor tissue differ from those of healthy tissue, which is mainly attributed to a difference in water content. This difference can be measured by a microwave-based sensing technique called microwave reflectometry. Previously reported microwave-based skin measurements largely relied on standard open-ended waveguide probes that are not suitable for early-stage skin tumor diagnosis. Thus, alternative near-field probe designs based on micromachined dielectric-rod waveguides are presented here. The thesis focuses on a broadband microwave probe that operates in the W-band (75 to 110 GHz), with a sensing depth and resolution tailored to small and shallow skin tumors, allowing a high sensitivity to early-stage malignant melanoma. Prototypes of the probe were fabricated by micromachining and characterized. For the characterization, a novel type of silicon-based heterogeneous sample with tailor-made permittivity was introduced. Furthermore, the performance of the probe was evaluated in vivo. First, through measurements on human volunteers, it was shown that the probe is sensitive to artificially induced changes in skin hydration. Then, measurements on murine skin melanoma models were performed, and small early-stage skin tumors were successfully distinguished from healthy skin. Additionally, a resonant probe for microwave skin sensing was designed, and micromachined prototypes were tested on phantom materials. However, the resonant probe was found to be less suitable than the broadband probe for measurements on skin. The broadband probe presented in this thesis is the first microwave near-field probe specifically designed for early-stage malignant melanoma diagnostics and successfully evaluated in vivo.

  • Public defence: 2019-08-29 13:15 F3, Stockholm
    Vu, Minh Thành
    KTH, School of Electrical Engineering and Computer Science (EECS), Information Science and Engineering.
    Perspectives on Identification Systems (2019). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Identification systems, such as biometric identification systems, are becoming ubiquitous. Fundamental bounds on the performance of these systems have been established in the literature. In this thesis we further relax several assumptions in the identification problem and derive the corresponding fundamental regions for these settings.

    The generic identification architecture is first extended so that users' information is stored in two layers. Additionally, the processing is separated into two steps, where the observation sequence in the first step is a noisy, pre-processed version of the original one. This setting generalizes several known settings in the literature. Given fixed pre-processing schemes, we study optimal trade-offs in the discrete and Gaussian cases. As corollaries, we also provide characterizations for related problems.

    In a second aspect, the joint distribution in the identification problem is relaxed in several ways. We first assume that all users' sequences are drawn from a common distribution that depends on a state of the system. The observation sequence is induced by a channel which has its own state. Another variant, in which the channel is fixed but the distributions of users' sequences are not necessarily identical, is considered next. We then study the case where users' data sequences are generated independently from a mixture distribution. Optimal performance regions for these settings are provided. We further give an inner bound and an outer bound on the region when the observation channel varies arbitrarily. Additionally, we strengthen the relation between the Wyner-Ahlswede-Körner problem and the identification problem and show the equivalence of the two.

    Finally, we study a binary hypothesis testing problem which decides whether or not the observation sequence is related to one user in the database. The optimal exponent of the second type of error is studied. Furthermore, we show that the single-user testing against independence problem studied by Ahlswede and Csiszár is equivalent to the identification problem as well as the Wyner-Ahlswede-Körner problem.
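    For context, the classical single-user result of Ahlswede and Csiszár referred to above can be stated compactly. This is the textbook Stein-exponent form for testing against independence with full observation of both sequences, stated here as background rather than as a result of the thesis:

```latex
% Testing against independence (Ahlswede--Csiszar setting, full observation):
%   H_0: (X^n, Y^n) \sim P_{XY}^n   versus   H_1: (X^n, Y^n) \sim P_X^n P_Y^n.
% By Stein's lemma, the optimal exponent of the second-type error beta_n,
% for any fixed bound epsilon on the first-type error, is the mutual
% information:
\lim_{n \to \infty} -\frac{1}{n} \log \beta_n(\epsilon)
  = D\bigl(P_{XY} \,\big\|\, P_X P_Y\bigr) = I(X;Y)
```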

  • Public defence: 2019-08-30 10:00 F3, Stockholm
    Henschen, Jonatan
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Fibre- and Polymer Technology.
    Bio-based preparation of nanocellulose and functionalization using polyelectrolytes (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Nanocellulose is a material that can be extracted from wood, and in recent years it has received great attention for its interesting properties and wide range of possible applications. With the aim of further expanding these applications, this work has studied a new way to produce nanocellulose, as well as the possibility of using polyelectrolyte adsorption to alter how materials made from nanocellulose interact with bacteria.

    Nanocellulose was produced by a novel concurrent esterification and hydrolysis of wood pulp in molten oxalic acid dihydrate. The resulting mixture was washed with ethanol, acetone or tetrahydrofuran before the cellulose oxalate was dried and fibrillated. The nanocellulose, obtained in high yield, had a high surface charge (up to 1.4 mmol g⁻¹) and contained particles with a morphology similar to both cellulose nanocrystals and cellulose nanofibrils. The material was used to prepare both Pickering emulsions and thin films with a strength of up to 197 MPa, a strain at break of up to 5%, a modulus of up to 10.6 GPa and an oxygen permeability as low as 0.31 cm³ µm m⁻² day⁻¹ kPa⁻¹.

    Polyelectrolyte adsorption of polyvinylamine and polyacrylic acid was used to modify materials made from nanocellulose. Materials in the form of films and aerogels were used as substrates. By altering the surface charge of the material, the surface structure and the number of layers of polyvinylamine/polyacrylic acid adsorbed, it was possible to prepare materials with both high and low bacterial adhesion. By changing the material properties it is possible to tailor materials with either contact-active or non-adhesive antibacterial properties, both of which are sustainable alternatives to the currently used antibacterial materials.

    Nanocellulose is a material which in the near future will probably be used in many applications. In order to improve the suitability of nanocellulose in certain applications it will be necessary to use production methods which differ from the existing methods, for example by using oxalation as a pre-treatment. By modifying the bacterial adhesion to materials prepared from nanocellulose, new medical and health applications emerge.

  • Public defence: 2019-08-30 13:00 Sal C, Kungl Tekniska högskolan, Stockholm
    Chaourani, Panagiotis
    KTH, School of Electrical Engineering and Computer Science (EECS), Electronics, Integrated devices and circuits.
    Sequential 3D Integration - Design Methodologies and Circuit Techniques (2019). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Sequential 3D (S3D) integration has been identified as a potential candidate for area-efficient ICs. It entails the sequential processing of tiers of devices, one on top of the other. The sequential nature of this processing allows the inter-tier vias to be processed like any other inter-metal vias, resulting in an unprecedented increase in the density of vertical interconnects. A lot of scientific attention has been directed towards the processing aspects of this 3D integration approach, in particular towards producing high-performance top-tier transistors without damaging the bottom-tier devices and interconnects. As far as the applications of S3D integration are concerned, much of the focus has been placed on digital circuits. However, the advent of Internet-of-Things applications has motivated the investigation of other circuits as well.

    As a first step, two S3D design platforms for custom ICs have been developed: one to facilitate the development of the in-house S3D process and the other to enable the exploration of S3D applications. Both contain device models and physical verification scripts. A novel parasitic extraction flow for S3D ICs has also been developed for the study of tier-to-tier parasitic coupling.

    The potential of S3D RF/AMS circuits has been explored and identified using these design platforms. A frequency-based partitioning scheme has been proposed, with high-frequency blocks placed in the top tier and low-frequency ones in the bottom. As a proof of concept, a receiver front-end for the ZigBee standard has been designed, and a 35% area reduction with no performance trade-offs has been demonstrated.

    To highlight the prospects of S3D RF/AMS circuits, a study of S3D inductors has been carried out. Planar coils have been identified as the optimal configuration for S3D inductors, and ways to improve their quality factors have been explored. Furthermore, a set of guidelines has been proposed to allow the placement of bottom-tier blocks under top-tier inductors, enabling very compact S3D integration. These guidelines take into consideration the operating frequencies and the type of components placed in the bottom tier.

    Lastly, the prospects of S3D heterogeneous integration for circuit design have been analyzed, with the focus on a Ge-over-Si approach. Based on the results of this analysis, track-and-hold circuits and digital cells have been identified as the circuits that could benefit the most from a Ge-over-Si S3D integration scheme, thanks to the low on-resistance of Ge transistors in the triode region. To improve the performance of top-tier Ge transistors, a processing flow that enables the control of their back-gates has also been proposed, which allows controlling the threshold voltage of top-tier transistors at runtime.

  • Public defence: 2019-09-06 10:00 Kollegiesalen, Stockholm
    Aljure, Mauricio
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Pre-breakdown Phenomena in Mineral Oil Based Nanofluids (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Mineral oil is a dielectric liquid commonly used in high-voltage equipment such as power transformers. Interestingly, it has been experimentally observed that the dielectric strength of mineral oil is improved when nanoparticles are added. However, the mechanisms behind these improvements are not well understood, hindering the further innovation process of these so-called nanofluids. This thesis aims to contribute to the understanding of the mechanisms explaining the dielectric-strength improvement of the base oil when nanoparticles are added.

    For this, several experiments and numerical simulations were performed. The initiation voltage of electric discharges in five different kinds of nanofluids was measured. The large data set obtained allowed us to cast experimental evidence on the existing hypotheses that are used to explain the effect of nanoparticles. It is found that hydrophilic nanoparticles hinder electric-discharge initiation from anode electrodes. On the other hand, electric-discharge initiation from cathode electrodes was hindered by nanoparticles with a low charge-relaxation time.

    The electric currents in mineral oil and nanofluids were also measured under intense electric fields (up to 2 GV/m). It is found that the addition of certain nanoparticles increases the measured currents. The possible physical mechanisms explaining the measured currents in mineral oil, with and without nanoparticles, are thoroughly discussed based on the results of numerical simulations. The preliminary parameters used in this thesis to model these mechanisms led to good agreement between the measured and simulated electric currents.

  • Public defence: 2019-09-06 10:00 F3, Stockholm
    Abdalmoaty, Mohamed
    KTH, School of Electrical Engineering and Computer Science (EECS), Automatic Control. KTH Royal Institute of Technology.
    Identification of Stochastic Nonlinear Dynamical Models Using Estimating Functions (2019). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Data-driven modeling of stochastic nonlinear systems is recognized as a very challenging problem, even when reduced to a parameter estimation problem. A main difficulty is the intractability of the likelihood function, which renders favored estimation methods, such as the maximum likelihood method, analytically intractable. During the last decade, several numerical methods have been developed to approximately solve the maximum likelihood problem. A class of algorithms that attracted considerable attention is based on sequential Monte Carlo algorithms (also known as particle filters/smoothers) and particle Markov chain Monte Carlo algorithms. These algorithms were able to obtain impressive results on several challenging benchmark problems; however, their application is so far limited to cases where fundamental limitations, such as the sample impoverishment and path degeneracy problems, can be avoided.

    This thesis introduces relatively simple alternative parameter estimation methods that may be used for fairly general stochastic nonlinear dynamical models. They are based on one-step-ahead predictors that are linear in the observed outputs and do not require the computation of the likelihood function. Therefore, the resulting estimators are relatively easy to compute and may be highly competitive in this regard: they are in fact defined by analytically tractable objective functions in several relevant cases. In cases where the predictors are analytically intractable due to the complexity of the model, it is possible to resort to plain Monte Carlo approximations. Under certain assumptions on the data and some conditions on the model, the convergence and consistency of the estimators can be established. Several numerical simulation examples and a recent real-data benchmark problem demonstrate the good performance of the proposed methods, in several cases that are considered challenging, with a considerable reduction in computational time in comparison with state-of-the-art sequential Monte Carlo implementations of the ML estimator.
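    The idea of a likelihood-free prediction-error estimator whose predictor is approximated by plain Monte Carlo can be illustrated on a toy model. The model y_t = sin(θ·u_t + w_t) + e_t, the noise levels, the grid search and the sample sizes below are all illustrative assumptions, not the models or benchmarks used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, u, rng):
    # Toy stochastic nonlinear model: y_t = sin(theta * u_t + w_t) + e_t,
    # with process noise w_t and measurement noise e_t.
    w = rng.normal(0.0, 0.5, size=u.shape)
    e = rng.normal(0.0, 0.1, size=u.shape)
    return np.sin(theta * u + w) + e

def predictor(theta, u, n_mc=2000):
    # Monte Carlo approximation of the predictor E[y_t]; this predictor
    # does not depend on past outputs, hence it is trivially linear in
    # them.  The expectation over w_t is approximated by sampling.
    w = np.random.default_rng(1).normal(0.0, 0.5, size=(n_mc, 1))
    return np.sin(theta * u[None, :] + w).mean(axis=0)

def estimate(u, y, grid):
    # Prediction-error estimate: the grid point minimising the sum of
    # squared one-step prediction errors; no likelihood is evaluated.
    costs = [np.sum((y - predictor(th, u)) ** 2) for th in grid]
    return float(grid[int(np.argmin(costs))])

u = rng.uniform(-2.0, 2.0, size=400)
y = simulate(1.0, u, rng)                      # true theta = 1.0
theta_hat = estimate(u, y, np.linspace(0.5, 1.5, 21))
```

Grid search stands in for the gradient-based optimisation one would use in practice; the point is that each cost evaluation needs only simulation, not the intractable likelihood.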

    Moreover, we provide some insight into the asymptotic properties of the proposed methods. We show that the accuracy of the estimators depends on the model parameterization and the shape of the unknown distribution of the outputs (via the third and fourth moments). In particular, it is shown that when the model is non-Gaussian, a prediction error method based on the Gaussian assumption is not necessarily more accurate than one based on an optimally weighted parameter-independent quadratic norm. Therefore, it is generally not obvious which method should be used. This result comes in contrast to a current belief in some of the literature on the subject. 

    Furthermore, we introduce the estimating functions approach, which was mainly developed in the statistics literature, as a generalization of the maximum likelihood and prediction error methods. We show how it may be used to systematically define optimal estimators, within a predefined class, using only a partial specification of the probabilistic model. Unless the model is Gaussian, this leads to estimators that are asymptotically uniformly more accurate than linear prediction error methods when quadratic criteria are used. Convergence and consistency are established under standard regularity and identifiability assumptions akin to those of prediction error methods.

    Finally, we consider the problem of closed-loop identification when the system is stochastic and nonlinear. A couple of scenarios given by the assumptions on the disturbances, the measurement noise and the knowledge of the feedback mechanism are considered. They include a challenging case where the feedback mechanism is completely unknown to the user. Our methods can be regarded as generalizations of some classical closed-loop identification approaches for the linear time-invariant case. We provide an asymptotic analysis of the methods, and demonstrate their properties in a simulation example.

  • Public defence: 2019-09-12 13:00 F3, Stockholm
    Bütepage, Judith
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Generative models for action generation and action understanding (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The question of how to build intelligent machines raises the question of how to represent the world to enable intelligent behavior. In nature, this representation relies on the interplay between an organism's sensory input and motor input. Action-perception loops allow many complex behaviors to arise naturally. In this work, we take these sensorimotor contingencies as an inspiration to build robot systems that can autonomously interact with their environment and with humans. The goal is to pave the way for robot systems that can learn motor control in an unsupervised fashion and relate their own sensorimotor experience to observed human actions. By combining action generation and action understanding we hope to facilitate smooth and intuitive interaction between robots and humans in shared work spaces.

    To model robot sensorimotor contingencies and human behavior we employ generative models. Since generative models represent a joint distribution over relevant variables, they are flexible enough to cover the range of tasks that we are tackling here. Generative models can represent variables that originate from multiple modalities, model temporal dynamics, incorporate latent variables and represent uncertainty over any variable - all of which are features required to model sensorimotor contingencies. By using generative models, we can predict the temporal development of the variables in the future, which is important for intelligent action selection.

    We present two lines of work. Firstly, we focus on unsupervised learning of motor control with the help of sensorimotor contingencies. Based on Gaussian Process forward models we demonstrate how the robot can execute goal-directed actions with the help of planning techniques or reinforcement learning. Secondly, we present a number of approaches to model human activity, ranging from pure unsupervised motion prediction to including semantic action and affordance labels. Here we employ deep generative models, namely Variational Autoencoders, to model the 3D skeletal pose of humans over time and, if required, include semantic information. These two lines of work are then combined to implement physical human-robot interaction tasks. Our experiments focus on real-time applications, both when it comes to robot experiments and human activity modeling. Since many real-world scenarios do not have access to high-end sensors, we require our models to cope with uncertainty. Additional requirements are data-efficient learning, because of the wear and tear of the robot and human involvement, online employability and operation under safety and compliance constraints. We demonstrate in our experiments how generative models of sensorimotor contingencies can satisfy these requirements.

  • Public defence: 2019-09-13 13:00 Sal B, Kista
    Sollami Delekta, Szymon
    KTH, School of Electrical Engineering and Computer Science (EECS), Electronics.
    Inkjet Printing of Graphene-based Microsupercapacitors for Miniaturized Energy Storage Applications (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Printing technologies are becoming increasingly popular because they enable the large-scale and low-cost production of functional devices with various designs, functions, mechanical properties and materials. Among these technologies, inkjet printing is promising thanks to its direct (mask-free) patterning, non-contact nature, low material waste, resolution down to 10 µm, and compatibility with a broad range of materials and substrates. As a result, inkjet printing has applications in several fields, including wearables, opto-electronics, thin-film transistors, displays, photovoltaic devices and energy storage. It is in energy storage that the technique shows its full potential, by allowing the production of miniaturized devices with a compact form factor, high power density and long cycle life, called microsupercapacitors (MSCs). Graphene, in turn, has a number of remarkable properties, such as high electrical conductivity, large surface area, elasticity and transparency, making it a top candidate as an electrode material for MSCs.

    Some key drawbacks limit the use of inkjet printing for the production of graphene-based MSCs. This thesis aims at improving its scalability by producing fully inkjet printed devices, and extending its applications through the integration of inkjet printing with other fabrication techniques.

    MSCs typically rely either on the deposition by hand of a gel electrolyte that is not printable, or on submerging the whole structure in a liquid electrolyte. Because of this, large-scale production of more than 10 interconnected devices has so far not been attempted. In this thesis, a printable gel-electrolyte ink based on poly(4-styrene sulfonic acid) was developed, allowing the production of large arrays of more than 100 fully inkjet-printed devices connected in series and parallel that can be reliably charged up to 12 V. A second electrolyte ink, based on nano-graphene oxide, a solid-state material with high ionic conductivity, was also formulated to optimize the volumetric performance of these devices. The resulting MSCs were likewise fully inkjet printed and exhibited an overall device thickness of around 1 µm, yielding a power density of 80 mW cm⁻³.

    Next, the use of inkjet printing of graphene was explored for the fabrication of transparent MSCs. This application is typically hindered by the so-called coffee-ring effect, which creates dark deposits on the edges of the drying patterns and depletes material from the inside area. In light of this issue, inkjet printing was combined with etching to remove the dark deposits thus leaving uniform and thin films of graphene with vertical sidewalls. The resulting devices showed a transmittance of up to 90%.

    Finally, the issue of the substrate compatibility of inkjet-printed graphene was addressed. Although inkjet printing is considered to have broad substrate versatility, it is unreliable on hydrophilic or porous substrates, and most inks (including graphene inks) require thermal annealing that damages substrates that are not resistant to heat. Accordingly, a technique based on inkjet printing and wet transfer was developed to reliably deposit graphene-based MSCs on a number of substrates with adverse surfaces, including flat, 3D, porous, plastic and biological (plants and fruits) ones.

    The contributions of this thesis have the potential to boost the use of inkjet printed MSCs in applications requiring scalability and resolution (e.g. on-chip integration) as well as applications requiring conformability and versatility (e.g. wearable electronics).

  • Public defence: 2019-09-16 09:00 F3, Stockholm
    del Aguila Pla, Pol
    KTH, School of Electrical Engineering and Computer Science (EECS), Information Science and Engineering.
    Inverse problems in signal processing: Functional optimization, parameter estimation and machine learning (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Inverse problems arise in any scientific endeavor. Indeed, it is seldom the case that our senses or basic instruments, i.e., the data, provide the answer we seek. It is only by using our understanding of how the world has generated the data, i.e., a model, that we can hope to infer what the data imply. Solving an inverse problem is, simply put, using a model to retrieve the information we seek from the data.

    In signal processing, systems are engineered to generate, process, or transmit signals, i.e., indexed data, in order to achieve some goal. The goal of a specific system could be to use an observed signal and its model to solve an inverse problem. However, the goal could also be to generate a signal so that it exposes a parameter to investigation by inverse problems. Inverse problems and signal processing overlap substantially and rely on the same set of concepts and tools. This thesis lies at the intersection between them, and presents results in modeling, optimization, statistics, machine learning, biomedical imaging and automatic control.

    The novel scientific content of this thesis is contained in the seven publications that compose it, which are reproduced in Part II. Five of these, mostly motivated by a biomedical imaging application, present a set of related optimization and machine learning approaches to source localization under diffusion and convolutional coding models. These are Publications A, B, E, F and G, which also include contributions to the modeling and simulation of a specific family of image-based immunoassays. Publication C presents the analysis of a system for clock synchronization between two nodes connected by a channel, a problem of utmost relevance in automatic control. The system exploits a specific node design to generate a signal that enables the estimation of the synchronization parameters. In the analysis, substantial contributions are made to the identifiability of sawtooth signal models under different conditions. Finally, Publication D brings to light and proves results that have been largely overlooked by the signal processing community, characterizing the information that quantized linear models contain about their location and scale parameters.

  • Public defence: 2019-09-19 13:00 Kollegiesalen, Stockholm
    Prästings, Anders
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Soil and Rock Mechanics.
    Managing uncertainties in geotechnical parameters: From the perspective of Eurocode 7 (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Geotechnical engineering is strongly associated with large uncertainties. Geotechnical site investigations are made only at discrete points, and most of a soil volume is never tested. A major issue is therefore how to cost-effectively reduce geotechnical uncertainties with respect to structural performance. Managing geotechnical uncertainties is thus an important aspect of the design process. Guidance on this subject is given in the European code for geotechnical design, Eurocode 7 (EN 1997), which advocates the use of the partial-factor method, with the added possibility of using the observational method if the uncertainties are large and difficult to assess.

    This thesis aims to highlight, develop and improve methods to assess the quality and value of geotechnical site investigations through reliability-based design. The thesis also discusses the limitations of the deterministic partial-factor method, according to its EN 1997 definition, and how to better harmonise this design methodology with the risk-based approach of reliability-based design. The main research contributions are: (1) a case study showing the importance of, and the potential gains from, a robust framework for the statistical evaluation of geotechnical parameters; (2) a discussion of the limitations of the partial-factor method in EN 1997; and (3) a discussion of how to harmonise the EN 1997 definition of the partial-factor method with the risk-based approach of reliability-based design.