  • Teye, Mattias
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Predictive Uncertainty Estimates in Batch Normalized Neural Networks (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Recent developments in Bayesian Learning have made the Bayesian view of parameter estimation applicable to a wider range of models, including Neural Networks. In particular, advancements in Approximate Inference have enabled the development of a number of techniques for performing approximate Bayesian Learning. One recent addition to these models is Monte Carlo Dropout (MCDO), a technique that only relies on Neural Networks being trained with Dropout and L2 weight regularization. This technique provides a practical approach to Bayesian Learning, enabling the estimation of valuable predictive distributions from many models already in use today. In recent years, however, Batch Normalization has become the go-to method to speed up training and improve generalization. This thesis shows that the MCDO technique can be applied to Neural Networks trained with Batch Normalization by a procedure called Monte Carlo Batch Normalization (MCBN) in this work. A quantitative evaluation of the quality of the predictive distributions for different models was performed on nine regression datasets. With no batch size optimization, MCBN is shown to outperform an identical model with constant predictive variance for seven datasets at the 0.05 significance level. Optimizing batch sizes for the remaining datasets resulted in MCBN outperforming the comparative models in one further case. An equivalent evaluation for MCDO showed that MCBN and MCDO yield similar results, suggesting that there is potential to adapt the MCDO technique to the more modern Neural Network architecture provided by Batch Normalization.
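
    The Monte Carlo idea behind MCDO and MCBN can be sketched in a few lines: keep the stochastic component (dropout here; batch statistics in MCBN) active at prediction time and aggregate repeated forward passes into a predictive mean and variance. This is a toy one-layer model with invented weights, not the thesis code:

```python
import random
import statistics

def stochastic_forward(x, weights, drop_p=0.5):
    """One forward pass of a toy one-layer model with dropout kept ON.

    Each weight is randomly zeroed with probability drop_p (inverted
    dropout scaling keeps the expectation unchanged), so repeated calls
    give different outputs -- the source of the Monte Carlo uncertainty.
    """
    kept = [w * (0.0 if random.random() < drop_p else 1.0 / (1.0 - drop_p))
            for w in weights]
    return sum(w * x for w in kept)

def mc_predict(x, weights, samples=500):
    """Predictive mean and variance from repeated stochastic passes."""
    outs = [stochastic_forward(x, weights) for _ in range(samples)]
    return statistics.mean(outs), statistics.variance(outs)

random.seed(0)
mean, var = mc_predict(2.0, [0.5, -0.3, 0.8])
```

    The predictive variance is what a model with constant variance (the comparison baseline in the evaluation above) cannot provide per input.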

  • Miller Ugalde, Patrick
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Temporally Stable Clusters of Movie Series: A Machine Learning Approach to Content Segmentation (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Clustering techniques have been shown to provide insight in various domains and applications. Adaptive evolutionary spectral clustering is a state-of-the-art method to obtain temporally stable clustering results from time-stamped data. This thesis explores the use of adaptive evolutionary spectral clustering to cluster film series into groups based on video streaming data. The developed method successfully performs a stable segmentation of film series into groups and introduces a number of extensions to the framework within the context of video on demand. We find that the implemented method allows for reasoning about clusters from an evolutionary perspective and that the state of the art can be extended to introduce a dynamic number of clusters without negatively impacting the stability of cluster properties.

  • Jyu, Yuanping
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Using Gamification and Augmented Reality to Encourage Japanese Second Language Students to Speak English (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Language anxiety is one of the key problems that hinder language learners from speaking a target second language. This problem is especially relevant for Japanese second language learners, who tend to have difficulties with English. To alleviate this problem, researchers have succeeded in leveraging modern technologies to help second language learners practice various language skills, including speaking. This body of research, which brings modern technologies into the field of education, serves as the guide and basis for this paper. This study aims to help Japanese second language students overcome the barrier of speaking English by designing a game-based language learning tool that incorporates elements of Augmented Reality (AR) technology.

    In this study, we try to encourage students to speak English by designing an AR-aided cooperative game. The GOAT (Gamified cOmunicAtion Tool) application was developed with gamification and AR technology. The tool was evaluated with 39 second language students at different stages of its development over a period of eight months. The results suggest that the GOAT app has high potential to help students overcome their language anxiety and, ultimately, the barrier of speaking English. The findings of this study indicate that the use of gamification in the designed tool has a positive influence on second language learners. In particular, the GOAT app was found to promote students’ confidence and to encourage communication in a public setting. The enhanced confidence and more frequent communication ultimately help Japanese students converse in English in a more natural and fluent manner. Nonetheless, the evidence on whether the use of AR elements has a direct positive influence on second language learners’ confidence is not yet sufficient; further research along this path is recommended before a more concrete conclusion can be made.

  • Svenningsson, Jakob
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Efficient Enclave Communication through Shared Memory: A case study of Intel SGX enabled Open vSwitch (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Open vSwitch is a virtual network switch commonly used to forward network packets between virtual machines. The switch routes network packets based on a set of flow rules stored in its flow tables. Open vSwitch does not provide confidentiality or integrity protection of its flow tables; therefore, an attacker can exploit software vulnerabilities in Open vSwitch to gain access to the host machine and observe or modify installed flow rules.

    Medina [1] brought integrity and confidentiality guarantees to the flow tables of Open vSwitch, even in the presence of untrusted privileged software, by confining them inside an Intel SGX enclave. However, using an enclave to protect the flow tables has significantly reduced the performance of Open vSwitch. This thesis investigates how and to what extent the performance overhead introduced by Intel SGX in Open vSwitch can be reduced.

    The method consisted of the development of a general-purpose communication library for Intel SGX enclaves, and two optimized SGX enabled Open vSwitch prototypes. The library enables efficient communication between the enclave and the untrusted application through shared memory-based techniques. Integrating the communication library in Open vSwitch, combined with other optimization techniques, resulted in two optimized prototypes that were evaluated on a set of common Open vSwitch use cases.
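
    The shared-memory communication pattern such a library implements can be illustrated with a toy single-producer/single-consumer ring buffer. This is a Python sketch of the general idea only, not the thesis's actual library; SGX specifics (enclave memory layout, synchronization across the boundary) are omitted:

```python
class RingBuffer:
    """Toy single-producer/single-consumer ring buffer.

    Illustrates the shared-memory message-passing pattern: the producer
    writes at `head`, the consumer reads at `tail`, and no message
    requires a costly boundary crossing (in SGX terms: no ocall/ecall
    per message).
    """
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0  # next slot to write
        self.tail = 0  # next slot to read

    def push(self, item):
        nxt = (self.head + 1) % len(self.buf)
        if nxt == self.tail:
            return False  # buffer full; caller retries
        self.buf[self.head] = item
        self.head = nxt
        return True

    def pop(self):
        if self.tail == self.head:
            return None  # buffer empty
        item = self.buf[self.tail]
        self.tail = (self.tail + 1) % len(self.buf)
        return item

rb = RingBuffer(4)
for msg in ("flow-add", "flow-del", "stats"):
    rb.push(msg)
received = [rb.pop() for _ in range(3)]
```

    One slot is deliberately left unused so that full and empty states remain distinguishable without a separate counter.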

    The results of this thesis show that it is possible to reduce the overhead introduced by Intel SGX in Open vSwitch by several orders of magnitude, depending on the use case and optimization technique, without compromising its security guarantees.

  • Klobusická, Patricia
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Welcome to KTH: designing a tool for sustainable integration of international students: Case Study (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This study aims to present a design for a tool for sustainable integration of international students at KTH in Stockholm, Sweden. Integration has three main parts: social integration, which concerns interaction with natives; structural integration, which is concerned with matters such as a civic number and a job; and cultural integration, which deals with customs, traditions, and religion. The tool has two main features, both of which aim to create favourable conditions for all three subsets of integration. The tool was developed by conducting 18 interviews, two rounds of prototyping and two rounds of user testing.

    The tool consists of two main parts: informational and social. The informational part provides structural information about institutions and getting around, as well as information about cultural events; attendance at these events by international students has the potential to strengthen social integration as well. The social part is designed as a 1-on-1 randomised chat that aims to encourage new friendships between international students and natives: a new student can ask a question about anything and is randomly assigned a native who answers it, creating favourable conditions for forming friendships between newcomers and natives.

  • Hammarlund, Hampus
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Designing a Software System to Improve Employee Motivation Through Behavior Change (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Due to unique circumstances, Japan is facing world-leading rates of employee dissatisfaction. This trend in turn has had a ripple effect, causing many other aspects of employees’ lives to be affected. One such area is work motivation, and in turn overall performance has also suffered. Given the large impact of this problem, companies now have a financial incentive to help employees, beyond the social wellbeing argument that could be made before.

    A solution to this problem that the current project explored is a software system to increase employee motivation through behavior change. Using a software system to collect voluntary data on employees, the individual needs of users can be determined. These individual needs can then be addressed through tailored behavior change interventions.

    The current paper covers the system’s architecture and evaluation, as well as the design of the behavior change intervention used during the experiments. From the results of these experiments, it is argued why the system designed and developed was a good solution to the problem of low employee motivation.

  • Persson, Emil
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Evaluating tools and techniques for web scraping (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The purpose of this thesis is to evaluate state-of-the-art web scraping tools. To support the process, an evaluation framework to compare web scraping tools is developed and utilised, based on previous work and established software comparison metrics. Twelve tools from different programming languages are initially considered. These twelve tools are then reduced to six, based on factors such as similarity and popularity. Nightmare.js, Puppeteer, Selenium, Scrapy, HtmlUnit and rvest are kept and then evaluated. The evaluation framework includes performance, features, reliability and ease of use. Performance is measured in terms of run time, CPU usage and memory usage. The feature evaluation is based on implementing and completing tasks, with each feature in mind. In order to reason about reliability, statistics regarding code quality and GitHub repository statistics are used. The ease of use evaluation considers the installation process, official tutorials and the documentation.

    While all tools are useful and viable, results showed that Puppeteer is the most complete tool. It had the best ease of use and feature results, while staying among the top in terms of performance and reliability. If speed is of the essence, HtmlUnit is the fastest; it does, however, use the most overall resources. Selenium with Java is the slowest and uses the most memory, but is the second best performer in terms of features. Selenium with Python uses the least memory and the second least CPU power. If JavaScript pages are to be accessed, Nightmare.js, Puppeteer, Selenium and HtmlUnit can be used.
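
    The performance side of such an evaluation (run time and memory usage) can be sketched as a small measurement harness. This is illustrative only, not the thesis's framework; the scraping task is a stand-in, and CPU usage would need platform tooling that is omitted here:

```python
import time
import tracemalloc

def profile(fn, *args):
    """Measure wall-clock run time and peak memory of one call."""
    tracemalloc.start()
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

def fake_scrape(n):
    # Stand-in for a scraping task: build n parsed "records".
    return [{"id": i, "title": f"item-{i}"} for i in range(n)]

records, seconds, peak_bytes = profile(fake_scrape, 10_000)
```

    Running the same harness over each tool's task implementation is what makes the run-time and memory columns of such a comparison directly comparable.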

  • Peterson, Thomas
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Alternating Control Flow Graph Reconstruction by Combining Constant Propagation and Strided Intervals with Directed Symbolic Execution (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this thesis we address the problem of control flow reconstruction in the presence of indirect jumps. We introduce an alternating approach which combines over- and under-approximation to increase the precision of the reconstructed control flow automaton compared to pure over-approximation. More specifically, the abstract interpretation based tool Jakstab, from earlier work by Kinder, is used for the over-approximation. Furthermore, directed symbolic execution is applied to under-approximate successors of instructions when these cannot be over-approximated precisely. The results of our experiments show that our approach can improve the precision of the reconstructed CFA compared to only using Jakstab. However, they reveal that our approach consumes a large amount of memory, since it requires extraction and temporary storage of a large number of possible paths to the unresolved locations. As such, its usability is limited to control flow automata of limited complexity. Further, the results show that strided interval analysis suffers in performance when encountering particularly challenging loops in the control flow automaton.
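
    The strided-interval abstract domain mentioned above can be sketched in its textbook s[lo, hi] form with the standard sound join (a minimal illustration, not Jakstab's implementation):

```python
from math import gcd

class StridedInterval:
    """s[lo, hi]: the set {lo, lo + s, ..., hi} (stride 0 means a constant)."""
    def __init__(self, stride, lo, hi):
        self.stride, self.lo, self.hi = stride, lo, hi

    def contains(self, v):
        if not (self.lo <= v <= self.hi):
            return False
        return self.stride == 0 or (v - self.lo) % self.stride == 0

    def join(self, other):
        """Sound over-approximating join: covers both operand sets."""
        lo, hi = min(self.lo, other.lo), max(self.hi, other.hi)
        stride = gcd(gcd(self.stride, other.stride), abs(self.lo - other.lo))
        return StridedInterval(stride, lo, hi)

a = StridedInterval(4, 0, 16)   # {0, 4, 8, 12, 16}
b = StridedInterval(4, 2, 10)   # {2, 6, 10}
j = a.join(b)                   # 2[0, 16]: every even value in the range
```

    The join loses precision (it admits values in neither operand, e.g. 14 here) but never drops a value, which is exactly the over-approximation property the alternating approach trades against under-approximation.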

  • Swords, Michael
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Finding patterns in procurements and tenders using a graph database (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Graph databases are becoming more and more prominent as a result of the increasing amount of connected data. Storing data in a graph database allows for greater insight into the relationships between the data and not just the data itself. An area with a large focus on relationships is that of public procurements: relationships such as who created which procurement and who won it. Procurement data today can be very unstructured or inaccessible, which means that little analysis is available in the area. To make it easier to analyse the procurement market, there is a need for a proficient way of storing the data.

    This thesis provides a proof of concept of the combination of public procurements and graph databases. A comparison is made between two models of different granularity, measuring both query speed and storage size. There has also been an exploration of what interesting patterns can be extracted from the public procurement data using centrality and community detection. The result of the model comparison shows a distinct increase in query speed at the cost of storage size. The result of the exploration is several examples of interesting patterns retrieved using a graph database with public procurement data, which show the potential of graph databases.
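
    The centrality analysis mentioned above can be made concrete with a toy degree-centrality computation over an invented buyer-procurement-supplier graph (node names are fabricated for illustration; a real system would run the graph database's own algorithms):

```python
from collections import defaultdict

# Toy procurement graph: (buyer, procurement) and (procurement, supplier)
# edges. All names are invented for illustration.
edges = [
    ("CityA", "P1"), ("CityA", "P2"), ("CityB", "P3"),
    ("P1", "SupplierX"), ("P2", "SupplierX"), ("P3", "SupplierY"),
]

degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# The highest-degree nodes are the hubs of the procurement market:
# e.g. a supplier winning many procurements stands out immediately.
hubs = sorted(degree, key=degree.get, reverse=True)
```

    Degree is the simplest centrality; community detection asks the complementary question of which buyers and suppliers cluster together.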

  • Engerstam, Sviatlana
    KTH, School of Architecture and the Built Environment (ABE), Real Estate and Construction Management, Real Estate Economics and Finance.
    Macroeconomic determinants of apartment prices in Swedish and German cities. Manuscript (preprint) (Other academic).
    Abstract [en]

    We study the long-term effects of macroeconomic fundamentals on apartment prices in major urban areas in Sweden and Germany. Panel cointegration analysis was chosen as the primary approach due to the limited availability of data for a more extended period and frequency. The dataset covers two countries, Germany and Sweden: the Swedish part includes three major cities over a period of 23 years, while the German part includes seven “Big cities” over 29 years. Pooling the observations allows us to overcome data restrictions in econometric analysis of long-term time series, such as spatial heterogeneity, cross-sectional dependence and non-stationary, but cointegrated, data. The results are in line with previous studies and also allow comparison of single-city estimations in an integrated equilibrium framework. The empirical results indicate that apartment prices react much more strongly to changes in fundamentals in major Swedish cities than in German ones, despite quite similar underlying fundamentals. Comparative analysis of regulations on the rental market, bank lending policies, and approaches to valuation for mortgage purposes in these two countries provides evidence that this overreaction arises due to institutional differences in the form of bank lending policies, mortgage valuation practices, and regulations on the rental market. Application of a more sustainable value concept, such as mortgage lending value, in mortgage valuations could make lending for housing less procyclical and stabilize house prices over the long run. Moreover, it would help to keep house prices from overreacting to changes in macroeconomic fundamentals.

  • Våge, William
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Using machine learning for resource provisioning to run workflow applications in IaaS Cloud (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The rapid advancement of cloud computing has made it possible to execute large computations, such as scientific workflow applications, faster than ever before. Executing workflow applications in cloud computing consists of choosing instances (resource provisioning) and then scheduling (resource scheduling) the tasks to execute on the chosen instances. Because finding the fastest execution time (makespan) of a scientific workflow within a specified budget is an NP-hard problem, it is common to use heuristics or metaheuristics to solve it.

    This thesis investigates the possibility of using machine learning as an alternative way of finding resource provisioning solutions for the problem of scientific workflow execution in the cloud. To investigate this, we evaluate whether a trained machine learning model can predict provisioning instances with solution quality close to that of a state-of-the-art algorithm (PACSA) but in a significantly shorter time. The machine learning models are trained for the scientific workflows Cybershake and Montage, using workflow properties as features and solution instances given by the PACSA algorithm as labels. The predicted provisioning instances are scheduled using an independent HEFT scheduler to obtain a makespan.

    It is concluded from the project that it is possible to train a machine learning model to achieve solution quality close to what the PACSA algorithm reports, in a significantly shorter computation time, and that the best performing models in the thesis were the Decision Tree Regressor (DTR) and the Support Vector Regressor (SVR). This is shown by the fact that the DTR and the SVR are on average only 4.97% (Cybershake) and 2.43% (Montage) slower than the PACSA algorithm in terms of makespan, while incurring on average only 0.64% (Cybershake) and 0.82% (Montage) budget violations. For large workflows (1000 tasks), the models showed an average execution time of 0.0165 seconds for Cybershake and 0.0205 seconds for Montage, compared to the PACSA algorithm’s execution times of 57.138 seconds for Cybershake and 44.215 seconds for Montage. It was also found that the models are able to come up with a better makespan than the PACSA algorithm for some problem instances and to solve some problem instances that the PACSA algorithm failed to solve. Surprisingly, the models even outperform PACSA in 11.5% of the cases for the Cybershake workflow and 19.5% of the cases for the Montage workflow.

  • Li, Yingyu
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Object Detection and Instance Segmentation of Cables (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis introduces an innovative method to detect and segment cables for visual inspection. Cables lack distinctive features and fixed structure, which makes them difficult to capture against a cluttered background. The method is based on cable color and cable width in a specific scenario and takes a split-and-merge approach to detection and segmentation. It can be used to inspect the status of cables on radio towers for maintenance and damage assessment by analyzing photos captured by unmanned aerial vehicles (UAVs). The method may also benefit the navigation of UAVs and of autonomous underwater vehicles. Under a loose metric with an IoU threshold of 30%, the mean precision reaches 50.79% and the mean recall reaches 55.96%.
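
    The IoU metric behind the "loose metric with IoU of 30%" can be made concrete with a short sketch for axis-aligned boxes (the threshold is from the abstract above; the boxes themselves are invented):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Under the loose metric, a detection counts as correct whenever its
# IoU with the ground truth reaches the 0.30 threshold.
match = iou((0, 0, 10, 10), (5, 5, 15, 15)) >= 0.30
```

    Lowering the threshold to 0.30 (from the more common 0.50) credits detections that localize a cable only roughly, which suits thin, elongated objects.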

  • Huang, Mengdi
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Consistent Outdoor Environment Mapping with a 3D Laser Scanner (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Simultaneous Localization and Mapping (SLAM) is an essential prerequisite for various automated systems, such as self-driving cars and planetary rovers. These autonomous machines acquire knowledge of the environment by building a map while exploring an unknown area. Without this knowledge, they are not able to make the right decisions.

    We used a 3D laser scanner with 16 channels, together with encoders, to collect the internal and external information. We then estimate the trajectory the robot has traveled and build a consistent map from the sensor data. In this project, we studied and proposed several ways to improve the existing registration and optimization methods. By adding odometry information to predict the initial estimate used as input for the Normal Distributions Transform (NDT), the performance is shown to improve. The results also improved noticeably after we distributed the error before feeding the registration result into the g2o optimization framework. Besides this, we proposed a method to increase the weight of certain voxels to improve the performance of NDT. We also experimented with different configurations of NDT and g2o to see how they impact the results.
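
    The NDT representation referred to above is built from per-voxel statistics: each voxel's points are summarized by a mean and covariance, and registration matches new scans against these normal distributions. A pure-Python 2D illustration with invented points, not the thesis pipeline:

```python
def voxel_stats(points):
    """Mean and covariance of the points falling in one NDT voxel.

    NDT models each voxel's point set as a normal distribution; this
    2D version returns the mean and the 2x2 covariance matrix.
    """
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    return (mx, my), [[cxx, cxy], [cxy, cyy]]

mean, cov = voxel_stats([(0.0, 0.0), (2.0, 0.0), (1.0, 2.0), (1.0, -2.0)])
```

    Re-weighting certain voxels, as the thesis proposes, amounts to scaling each voxel's contribution to the registration score computed against these distributions.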

    We conducted and analyzed experiments on two datasets, an urban dataset and a forest dataset. The mapping is considered successful on the urban dataset.

  • Yeramian, Kevin
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Autonomous testing of web forms (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Accessing pages behind a web form requires filling it in with correct information; as a result, web forms tend to hinder automatic navigation through web sites. In order to fill a web form, we extract relevant information contained in the HTML. The difficulty arises from the fact that web pages are designed to be read by humans, not by robots. A human user can easily extract the information contained in a web form that is necessary to fill it. Extraction of visual information for automatic filling of web forms is an ongoing topic of research which has already provided interesting results; however, the task of indexing web sites continues to require some human intervention. This thesis presents a novel method of extracting visual as well as hidden information and automatically labeling each field of a web form. The classification step boils down to finding keywords and then associating them with a label by using the validation and submission mechanisms of web forms. These labeled data are then used to train machine learning models that aim at classifying text from given fields of a web form. A comparison between two different methods of classification illustrates the poor results obtained by the machine learning models when compared to the new keyword-based method.
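
    The keyword-to-label association step can be sketched as follows. The keyword table and attribute strings here are invented for illustration; the thesis derives its labels from the forms' own validation and submission behavior rather than from a fixed table:

```python
# Hypothetical keyword table, purely for illustration.
KEYWORDS = {
    "email":    ("email", "e-mail", "mail"),
    "password": ("password", "pass", "pwd"),
    "username": ("username", "user", "login"),
}

def label_field(attributes):
    """Label a form field from its visible and hidden HTML attributes."""
    text = " ".join(attributes).lower()
    for label, words in KEYWORDS.items():
        if any(w in text for w in words):
            return label
    return "unknown"

label = label_field(["id=user_email", "placeholder=Your e-mail"])
```

    Matching over both visible text (placeholders, labels) and hidden attributes (ids, names) is what lets such a classifier cope with forms that show humans one thing and robots another.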

  • Androulakaki, Theofronia
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Probing User Perceptions on Machine Learning (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Machine Learning is a technology that has risen in popularity in the last decade. Designers face difficulties in working with Machine Learning as a design material. In order to help designers cope with this material, many different approaches have been suggested, from books to insights of designers experienced with Machine Learning. In this research, the focus is on users’ perceptions of Machine Learning and how these could contribute to better design. For this purpose, probes were deployed with 10 participants to investigate the term Machine Learning. The probes consisted of simple tasks, carried out on the participants’ smartphones, that prompted them to recognize Machine Learning elements in applications they already use. Participants formed personalized perceptions of Machine Learning, ranging from creativity in Machine Learning to concerns about data use. Based on these findings, suggestions to designers were proposed. A secondary research question that emerged concerned the difficulties the researcher faced while probing Machine Learning user experiences for this specific research.

  • Karlsson, Viktor
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Introducing a Hierarchical Attention Transformer for document embeddings: Utilizing state-of-the-art word embeddings to generate numerical representations of text documents for classification (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The field of Natural Language Processing has produced a plethora of algorithms for creating numerical representations of words or subsets thereof. These representations encode the semantics of each unit, which for word-level tasks enables immediate utilization. Document-level tasks, on the other hand, require special treatment in order for fixed-length representations to be generated from varying-length documents.

    We develop the Hierarchical Attention Transformer (HAT), a neural network model which utilizes the hierarchical nature of written text for creating document representations. The network relies entirely on attention, which enables interpretability of its inferences and allows context to be attended to from anywhere within the sequence.

    We compare our proposed model to current state-of-the-art algorithms in three scenarios: datasets of documents with an average length (1) less than three paragraphs, (2) greater than an entire page, and (3) greater than an entire page with a limited amount of training documents. HAT outperforms its competition in cases 1 and 2, reducing the relative error by up to 33% and 32.5% respectively. HAT becomes increasingly difficult to optimize in case 3, where it did not perform better than its competitors.

  • Tatarakis, Nikolaos
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Differentially Private Federated Learning (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Federated Learning is a way of training neural network models in a decentralized manner; it utilizes several participating devices (that hold the same model architecture) to learn, independently, a model on their local data partition. These local models are then aggregated (in parameter domain), achieving performance equivalent to that of a centrally trained model. On the other hand, Differential Privacy is a well-established notion of data privacy preservation that can provide formal privacy guarantees based on rigorous mathematical and statistical properties. The majority of the current literature at the intersection of these two fields only considers privacy from a client’s point of view (i.e., the presence or absence of a client during decentralized training should not affect the distribution over the parameters of the final (central) model). However, it disregards privacy at the single (training) data-point level (i.e., if an adversary has partial, or even full, access to the remaining training data-points, they should be severely limited in inferring sensitive information about that single data-point, as long as it is bounded by a differential privacy guarantee). In this thesis, we propose a method for end-to-end privacy guarantees with minimal loss of utility. We show, both empirically and theoretically, that privacy bounds at a data-point level can be achieved within the proposed framework. As a consequence, satisfactory client-level privacy bounds can be realized without making the system noisier overall, while obtaining state-of-the-art results.
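
    Data-point-level guarantees of this kind are typically obtained with a DP-SGD-style step: clip each per-example gradient to a fixed L2 norm and add calibrated Gaussian noise. A minimal sketch of that standard step; the hyperparameter values are illustrative, not the thesis's:

```python
import math
import random

def privatize_gradient(grad, clip_norm=1.0, noise_multiplier=1.1):
    """Clip a per-example gradient to L2 norm C, add N(0, (sigma*C)^2) noise.

    Clipping bounds any single data-point's influence; the Gaussian
    noise then hides that bounded influence, which is what yields a
    differential privacy guarantee for the data point.
    """
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    sigma = noise_multiplier * clip_norm
    return [g + random.gauss(0.0, sigma) for g in clipped]

random.seed(42)
noisy = privatize_gradient([3.0, 4.0])  # raw L2 norm 5.0, clipped to 1.0
```

    In a federated setting, the same clip-then-noise pattern applied to client updates instead of per-example gradients is what yields client-level (rather than data-point-level) guarantees.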

  • Yu, Shi
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Exploring Use Cases for an Artificial Intelligence Poet (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    I report on the iterative process of designing a mobile AI poetry system, along with a series of broad-scale use cases in which different variants of the system have been tested in the wild. The project has so far resulted in the generation of about 20 million individual poems, co-created by the system together with millions of users. Apart from the design of the technical side of the system, my focus has been on how the system could be adapted to and deployed in different commercial settings. I discuss my insights related to systems support for creative processes, and how findings from these use cases could be applicable also to other AI content generation systems.

  • Hallgrímsson, Guðmundur
    KTH, School of Electrical Engineering and Computer Science (EECS).
    An Embedded System for Classification and Dirt Detection on Surgical Instruments (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The need for automation in healthcare has been rising steadily in recent years, both to increase efficiency and to free educated workers from repetitive, menial, or even dangerous tasks. This thesis investigates the implementation of two pre-determined and pre-trained convolutional neural networks on an FPGA for the classification and dirt detection of surgical instruments in a robotics application. A background on the inner workings and history of artificial neural networks is given and expanded on in the context of convolutional neural networks. The Winograd algorithm for computing convolutions is presented as a method for increasing the computational performance of convolutional neural networks. A development platform and toolchain are then selected. A high-level design of the overall system is explained, before details of the high-level synthesis implementation of the dirt detection convolutional neural network are shown. Measurements are then made on the performance of the high-level synthesis implementation of the various blocks needed for convolutional neural networks. The main convolutional kernel is implemented both using the Winograd algorithm and the naive convolution algorithm, and comparisons are made. Finally, measurements on the overall performance of the end-to-end system are made and conclusions are drawn. The final product of the project gives a good basis for further work on implementing a complete system to handle this functionality in a manner that is both power-efficient and low in latency. Such a system would utilize the different strengths of general-purpose sequential processing and the parallelism of an FPGA and tie those together in a single system.
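
    The Winograd algorithm mentioned above can be shown in its smallest form, F(2,3), which computes two outputs of a 3-tap convolution with four multiplications instead of the six a direct computation needs (a Python sketch of the standard transform, not the FPGA implementation):

```python
def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap convolution in 4 multiplies.

    d: 4 input values, g: 3 filter taps. The transform trades
    multiplications for additions, which is what makes it attractive
    in CNN hardware where multipliers are the scarce resource.
    """
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct_conv3(d, g):
    """Reference: sliding 3-tap correlation over 4 inputs (6 multiplies)."""
    return [sum(d[i + k] * g[k] for k in range(3)) for i in range(2)]
```

    Since the filter g is fixed at inference time, the three g-dependent factors can be precomputed once, so the per-output cost really is four multiplications.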

  • Belo, Pedro
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Heterogeneous 3D Exploration with UAVs2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Multi-robot exploration algorithms usually focus on minimizing exploration time while ignoring map accuracy. This thesis presents a new heterogeneous multi-robot exploration strategy that balances time consumption against map accuracy. By ranking UAVs based on their sensor accuracy, it is possible to coordinate them and pick strategic points to explore rather than only the most rewarding ones. In particular, with a function (in this case a Gaussian) that maps a voxel’s uncertainty to a score, a UAV’s preference can be tailored (by tuning the expected value and variance) towards certain features and not only unexplored space. The algorithm was compared with AEPlanner in several environments, achieving better accuracy for complete map exploration.

  • Wei, Xiao
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Deep Active Learning for 3D Object Detection for Autonomous Driving2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    3D object detection is vital for autonomous driving. However, training a 3D detector often requires a huge amount of labeled data, which is extremely expensive and tedious to obtain. In order to alleviate the annotation effort while maintaining detection performance, we adopt an active learning framework for training a 3D object detector with the least amount of labeled data. In contrast with conventional passive learning, where a machine learning model is trained on a pre-determined training dataset, active learning allows the model to actively select the most informative samples for labeling and add them to the training set. Therefore, only a fraction of the data needs to be labeled. To the best of our knowledge, this thesis is the first to study active learning for 3D detection.

    We take progressive steps towards this goal, in three stages with increasingly complex models and learning tasks. First, we start with active learning for image classification, which can be viewed as a sub-problem of object detection. Second, we investigate and build a multi-task active learning framework with a deep refinement network for multi-modal 3D object detection. Lastly, we analyze multi-task active learning with a more complicated two-stage 3D LiDAR vehicle detector. In our experiments, we study the fundamental aspects of an active learning framework, with an emphasis on evaluating several popular data selection strategies based on prediction uncertainty. Without bells and whistles, we propose an active learning framework for 3D object detection using 3D LiDAR point clouds and accurate 2D image proposals that saves up to 60% of labeled data on a public dataset. In the end, we also discuss some underlying challenges of this topic from both academic and industrial perspectives.
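    The uncertainty-based selection strategies evaluated here generally follow the same pattern: score each unlabeled sample by the uncertainty of the model's prediction and query the top candidates. A minimal sketch of one common variant, entropy-based sampling (illustrative, not the thesis's exact strategy), looks like this:

```python
import numpy as np

def select_most_uncertain(probs, k):
    """Return indices of the k samples whose predictive distributions
    have the highest entropy (most uncertain first).

    probs: (n_samples, n_classes) array of softmax outputs."""
    eps = 1e-12  # guard against log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(entropy)[-k:][::-1]
```

    In an active learning loop, the selected indices are sent to annotators, the new labels are added to the training set, and the detector is retrained.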

  • Ma, Yanwen
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Exploiting mobile technology affordances to support second language students using affective learning2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Self-regulated learning (SRL), which involves challenges in both the cognitive and affective learning domains, is directly associated with students’ academic performance. It is especially critical for second language learners, who need to employ SRL strategies and skills to acquire the target language effectively. However, these students need help to develop their SRL, since the majority of them are not capable of making accurate judgments about their learning processes.

    This study aims to help second language learners develop the affective learning skills and strategies needed for successful acquisition of a studied second language. In this design-oriented case study, a special mobile tool, ATLAS (AffecTive LeArning Srl), was designed and evaluated with 13 second language students through semi-structured interviews. All interviews were carried out by the author of this thesis. Written informed consent was obtained from all participants for their responses to be used in this work. The interview data was later anonymized. The results showed that 85% of the study participants exhibited positive attitudes towards the use of the tool’s affective learning activities to support their development of SRL during their second language studies. In particular, the ATLAS tool was perceived to increase student motivation for SRL and their awareness of their SRL progress.

    All in all, this study stresses that it is beneficial to use technology-supported affective learning to assist students in developing the SRL skills, strategies, and knowledge needed for successful second language acquisition. From a practical perspective, this study also provides a tool and several design guidelines that should be considered when designing similar tools.

  • Persson, Alexander
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Redesigning a graphical user interface for usage in challenging environments with a user-centered design process2019Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Possible interactions with computers are an ever-evolving topic, and the usage of computers is more ubiquitous than ever. Designing with the user in mind is not an easy task even for regular use cases and interactions. Designing for users in a military context can be even more difficult, as the working environment of those users is demanding. This thesis investigates how a redesign of an existing GUI can reduce the impact of the contextually challenging environment of operating software in terrain vehicles and in outdoor weather. For the redesign of the GUI, a user-centered design process was performed. The process was initiated with contextual interviews and an affinity diagram for data gathering and analysis, which gave a deeper understanding of the users’ issues and needs. After defining the key elements for the redesign, a prototype was developed. The first prototype was evaluated by experienced users of the software outside the military context. With the feedback from those users, a revised version of the software was created and evaluated by current users through interviews in a military context. The evaluation showed that the users believed the redesigned GUI would help mitigate problems caused by the challenging context the software is used in, as well as improve their quality of work.

  • Bergström, Philip
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Multimodal Relation Extraction of Product Categories in Market Research Data2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Nowadays, large amounts of unstructured data are constantly being generated and made available through websites and documents. Relation extraction, the task of automatically extracting semantic relationships between entities from such data, is therefore considered to have high commercial value today. However, many websites and documents are richly formatted, i.e., they communicate information through non-textual expressions such as tabular or visual elements. This thesis proposes a framework for relation extraction from such data, in particular documents from the market research area. The framework performs relation extraction by applying supervised learning using both textual and visual features from PDF documents. Moreover, it allows the user to train a model without any manually labeled data by implementing labeling functions. We evaluate our framework by extracting relations from a corpus of market research documents on consumer goods. The extracted relations associate categories with products of different brands. We find that our framework outperforms a simple baseline model, although we are unable to show the effectiveness of incorporating visual features on our test set. We conclude that our framework can serve as a prototype for relation extraction from richly formatted data, although more advanced techniques are necessary to make use of non-textual features.
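    Labeling functions, as popularized by weak-supervision systems such as Snorkel, are small heuristics that each vote a label or abstain; their votes are then combined into training labels. The functions and rules below are illustrative inventions, not the ones used in the thesis:

```python
# Each labeling function votes POSITIVE, NEGATIVE, or abstains.
POSITIVE, NEGATIVE, ABSTAIN = 1, 0, -1

def lf_same_table_row(candidate):
    """Vote POSITIVE if category and product appear in the same table row."""
    return POSITIVE if candidate["same_row"] else ABSTAIN

def lf_far_apart(candidate):
    """Vote NEGATIVE if the two mentions lie visually far apart on the page."""
    return NEGATIVE if candidate["distance"] > 500 else ABSTAIN

def majority_vote(candidate, lfs):
    """Combine labeling-function votes by simple majority, ignoring abstains."""
    votes = [lf(candidate) for lf in lfs]
    pos, neg = votes.count(POSITIVE), votes.count(NEGATIVE)
    return POSITIVE if pos > neg else NEGATIVE if neg > pos else ABSTAIN
```

    In practice a learned label model usually replaces the majority vote, weighting each function by its estimated accuracy.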

  • Talts, Ülle-Linda
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Characterization of graphene-based sensors for forensic applications: Evaluating suitability of CVD graphene-based resistive sensor for detection of amphetamine2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Recent improvements in sensor technology and applications can be partly attributed to advancements in micro- and nanoscale fabrication processes and the discovery of novel materials. The emergence of reliable and inexpensive methods of producing monolayer materials, such as graphene, has revealed advantageous electronic properties which, when utilized in sensory elements, can significantly enhance the response to the input signal. Hence, graphene-based sensory devices have been widely investigated, as the exotic properties of the carbon nanomaterial allow for cost-efficient, scalable production of highly sensitive transduction elements. Previous studies have shown successful detection of n-type dopants such as ammonia and low-pH solutions. As the amine group in amphetamine molecules is known to behave as an electron donor, this study investigated graphene conductivity changes in response to exposure to amphetamine salt solutions. Graphene formed by chemical vapour deposition (CVD) was transferred onto a SiO2 substrate with gold electrodes to form a resistive transducer. The large intensity ratio of the graphene characteristic 2D and G peaks, together with minimal defect peaks, observed in Raman spectroscopy analysis proved that the integrity of the carbon monolayer was maintained. Atomic force microscopy and resistance measurements showed that storage of these sensory elements in ambient conditions results in adsorption of impurities which considerably influence the electronic properties of graphene. Upon exposure to amphetamine sulfate and amphetamine hydrochloride, a conductivity decrease was detected as expected. Signal enhancement by excitation with 470 nm light did not show a significant increase in response magnitude. However, the low reliability of the sensor response limited further analysis of the chemical sensor signal. A non-selective sensor response to amphetamine can be detected, but improvements in device design are needed to minimize contamination of the graphene surface by ambient impurities and variations in the sensor system.

  • Zervakis, Georgios
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Multivariate analysis of the parameters in a handwritten digit recognition LSTM system2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Throughout this project, we perform a multivariate analysis of the parameters of a long short-term memory (LSTM) system for handwritten digit recognition in order to understand the model’s behaviour. In particular, we are interested in explaining how this behaviour arises from the model’s parameters, and what in the network is responsible for the model arriving at a certain decision. This problem is often referred to as the interpretability problem, and falls under the scope of Explainable AI (XAI). The motivation is to make AI systems more transparent, so that trust can be established between humans and AI systems. For this purpose, we use the MNIST dataset, which has been used successfully in the past for tackling the digit recognition problem. Moreover, the balance and simplicity of the data make it an appropriate dataset for carrying out this research. We start by investigating the linear output layer of the LSTM, which is directly associated with the model’s predictions. The analysis includes several experiments in which we apply methods from linear algebra, such as principal component analysis (PCA) and singular value decomposition (SVD), to interpret the parameters of the network. For example, we experiment with different low-rank approximations of the output weight matrix in order to see the importance of each singular vector for each digit class. We found that, after cutting off the fifth left and right singular vectors, the model practically loses its ability to predict eights. Finally, we present a framework for analysing the parameters of the hidden layer, along with our implementation of an LSTM-based variational autoencoder that serves this purpose.
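    The singular-vector ablation described above amounts to reconstructing the weight matrix from its SVD with selected singular components zeroed out. A minimal sketch of that operation (illustrative; the thesis's exact experimental setup may differ):

```python
import numpy as np

def low_rank_without(W, drop):
    """Reconstruct W from its SVD with the singular components listed
    in `drop` (indices, largest singular value first) zeroed out."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s = s.copy()
    s[list(drop)] = 0.0  # remove the chosen singular directions
    return U @ np.diag(s) @ Vt
```

    Comparing the model's per-class accuracy before and after such an ablation reveals which singular directions carry the evidence for each digit class.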

  • Public defence: 2020-04-24 10:00
    Harahap, Fumi
    KTH, School of Industrial Engineering and Management (ITM), Energy Technology.
    Exploring synergies between the palm oil industry and bioenergy production in Indonesia2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Climate change, along with increasing demand for food and fuel, calls for sustainable use of natural resources. One way to address these concerns is through efficient use of resources, which is also vital for the achievement of the Sustainable Development Goals and the Paris Agreement. In this context, the sustainable and efficient use of resources in the palm oil industry is an interesting case to scrutinise. This is particularly important for Indonesia, the leading palm oil producer in the world. Large quantities of oils and biomass are generated from oil palm plantations and processing, presenting the potential for the development of bio-based production systems. However, at present, sustainability is a matter of great concern in this industry, including land use issues and the fact that large portions of the residues generated are untreated, releasing greenhouse gas emissions and posing environmental threats.

    This doctoral thesis aims at exploring how resource efficiency can be enhanced in the palm oil industry. Three research questions are posed to address the objective. The first question examines the sectoral policy goals of biofuel, agriculture, climate, and forestry and their requirements for land. The second question is focused on new industrial configurations for efficient use of palm oil biomass for bioenergy production. The final question summarises the role of enhancing resource efficiency in the palm oil industry with regard to meeting the national bioenergy targets, which include 5.5 GWe installed capacity and biofuel blending with fossil fuels (30% biodiesel blending with diesel and 20% ethanol blending with gasoline) in the transport, industry, and power sectors. The research questions are explored using three main methods: policy coherence analysis, techno-economic analysis, and a spatio-temporal optimisation model (BeWhere Indonesia).

    The thesis identifies areas in which policy formulation, in terms of sectoral land allocation, can be improved. Adjustments and improvements in policy formulation and implementation are crucial for land allocation. The inconsistencies in the use of recognised land classifications in the policy documents, the unclear definition of specific land categories, and the multiple allocation of areas, should be addressed immediately to ensure coherent sectoral policies on land allocation. This can lead to more effective policy implementation, reduce pressure on land, enhance synergies, and resolve conflicts between policy goals.

    The transition towards a more sustainable palm oil industry requires a shift from current traditional practices. Such transition involves efficient use of palm oil biomass resources through improved biomass conversion technologies and integration of palm oil mills with energy production in biorefinery systems. The upgrading of the conventional production systems can serve multiple purposes including clean energy access and production of clean fuels for the transport, industry, and power sectors, ultimately helping the country meet its renewable energy and sustainable development targets, along with reduced emissions. More specifically, the efficient use of biomass and co-production of bioenergy carriers in biorefineries can enable Indonesia to reach its targets for bioenergy installed capacity and bio-based blending.

    At present, many government policies in Indonesia are working in the right direction. Nevertheless, various barriers still need to be overcome so that resource efficiency can be improved. This includes harnessing the full potential of bioenergy in the palm oil industry. There is room for enhancing the sustainability of the palm oil industry in Indonesia with adjustments to existing policies and practices, as shown in this thesis. First, guidance across sectoral policies can help to coordinate the use of basic resources. Second, the shift from traditional practices requires a strategy that includes improvement in agricultural practices (i.e., higher yields), infrastructure for biomass conversion technologies together with improved grid connectivity, and adoption of a biorefinery system. Strengthening policy support is needed to promote such a comprehensive shift. Third, various programmes can forge partnerships between oil palm plantations, the palm oil mills, and energy producers to ensure the development of sustainable industrial practices. A sustainable palm oil industry will improve resource and cost efficiency, and help open international markets for Indonesian products. This could pave the way for an enhanced role for the Indonesian palm oil industry in global sustainability efforts.

  • Santesmases, David Ramos
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Automated noise measurements for very long wavelength infrared detectors2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The potential imaging performance of infrared (IR) detectors is dependent on the noise level in the detector pixels. Noise measurements at pixel level can therefore provide basic understanding of the intrinsic limitations of detectors. Accurate noise studies with measurements at different biases and with high frequency resolution can however consume a lot of time; therefore, automation of this process is necessary.

    In this thesis, automation of a noise measurement setup has been implemented. The automated system has proven to work for automatically acquiring noise spectra from long wavelength infrared detectors of different types, such as Quantum Well Infrared Photodetectors (QWIPs) and Type II superlattice (T2SL) detectors.

    The results of these studies have been used to calculate the noise gain of the detectors. They have also been key to determining discrepancies between different QWIP fabrication batches, and have helped clarify the differences in performance of the detectors from those batches. Regarding T2SL, noise measurements have been carried out on detectors with large differences in dark current. Finally, a study of the impact of pixel shape on T2SL noise has been conducted.

  • Ferles, Alexandros
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Employing Attention-Based Learning For Medical Image Segmentation2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Automated medical image analysis is a non-trivial task due to the complexity of medical data. With the advancements made in Computer Vision through the golden era of Deep Learning, many models that rely on deep convolutional networks have emerged in the Medical Imaging domain and offer important contributions to automating the analysis of medical images. Based on recent literature, this work proposes the adaptation of visual attention gates in fully convolutional encoder-decoder networks for the medical image segmentation task. Appropriate data pre-processing is performed for 2-dimensional and 3-dimensional data in order to serve them as proper inputs to conventional and attention-gated deep convolutional networks that identify classes at the pixel and voxel level respectively. Attention gates can be easily integrated into the conventional networks, improving their performance. We present the specific mechanics of attention gates, conduct experiments, and analyse the derived results. Finally, based on the latter, we provide our opinion and intuition on how this work can be further expanded in new research directions.
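    The attention gates referred to here, in the spirit of the Attention U-Net literature, rescale skip-connection features by a learned coefficient computed from the features themselves and a coarser gating signal. A simplified sketch with placeholder weights (the exact architecture in the thesis may differ):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, Wx, Wg, psi):
    """Additive attention gate over flattened features.
    x: (n, dx) skip features; g: (n, dg) gating signal from a coarser layer.
    Wx: (dx, da), Wg: (dg, da), psi: (da, 1) are illustrative weights."""
    q = np.maximum(x @ Wx + g @ Wg, 0.0)  # additive attention followed by ReLU
    alpha = sigmoid(q @ psi)              # (n, 1) gating coefficients in (0, 1)
    return alpha * x                      # attended skip features
```

    In a real encoder-decoder network the same idea is applied per spatial location with 1x1 convolutions, and the attended features are concatenated into the decoder path.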

  • Brynjarsson, Gudjon Ragnar
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Classifying nuclei in soft oral tissue slides2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    A big focus of pathology is the analysis of tissue, cell, and body fluid samples. In recent years, with the advent of high-resolution scanners, we have seen an increase in the use of artificial intelligence in pathology. In this work we present a new dataset (the KI dataset), compiled from slides of soft oral tissue, with the present nuclei labeled by pathologists at the Department of Dental Medicine at Karolinska Institutet. We test the performance of two fully trained neural networks, along with one fine-tuned deep model, on the KI dataset, aimed at classifying nuclei from the slides into one of three classes. The first fully trained network is a shallow CNN; the second is a slightly deeper version of the first. Both of these networks have previously been used for the task of nuclei classification, but on different datasets. We also test the performance of a fine-tuned VGG16 model, where we train the last layer of a model pre-trained on the widely used ImageNet dataset. The fine-tuned deep neural network (VGG16) produces promising results, and we show that the fully trained models perform better on the KI dataset than on a dataset of colon cancer slides, leading us to believe that the dataset is good and can be used in further research.

  • Athira, Athira
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Automated tracking and analysis of mouse whisker movements2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Whisking behaviour, observed in animals like rodents and marsupials, has been of great interest to neuroscientists as a model for the study of mechanisms behind active sensing. Most studies of whisking behaviour utilize video recordings of the movement. Since manual tracking of whisking in each video frame is infeasible, it becomes necessary to automate the tracking of whisker fibres across frames. To this end, we utilize a deep learning based framework called DeepLabCut to identify whiskers in the video frames. The work shows that it is possible to track individual whisker fibres in untrimmed mice using mono-ocular recordings of free whisking, without the use of any markers. The results from tracking across the recordings were used to derive parameters like whisker angles and velocities, which were used to describe the whisker movement in further analyses. Further, the time series data obtained from the tracking were modelled to examine the possibility of clustering the data into actively whisking and non-whisking bouts. It was observed that a Markov Random Field-based method (Toeplitz Inverse Covariance clustering) models the transitions between whisking and non-whisking bouts with better temporal consistency than a Gaussian Hidden Markov Model.
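    Deriving whisker angles and velocities from tracked keypoints reduces to simple geometry on the per-frame coordinates. A minimal sketch of the kind of computation involved (illustrative; the thesis's exact conventions for the reference axis are not specified here):

```python
import math

def whisker_angle(base, tip):
    """Angle in degrees of the whisker shaft, from two tracked
    (x, y) keypoints, relative to the positive x-axis."""
    dx, dy = tip[0] - base[0], tip[1] - base[1]
    return math.degrees(math.atan2(dy, dx))

def angular_velocity(angles, fps):
    """Frame-to-frame angular velocity in degrees per second."""
    return [(a2 - a1) * fps for a1, a2 in zip(angles, angles[1:])]
```

    The resulting angle and velocity time series are what the clustering methods mentioned above operate on.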

  • Wu, Ching-An
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Investigation of Different Observation and Action Spaces for Reinforcement Learning on Reaching Tasks2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Deep reinforcement learning has been shown to be a potential alternative to traditional controllers for robotic manipulation tasks. Most modern deep reinforcement learning methods used for robotic control fall into the so-called model-free paradigm. While model-free methods require less space and have better generalization capability than model-based methods, they suffer from higher sample complexity, which leads to the problem of sample inefficiency. In this thesis, we analyze three modern model-free deep reinforcement learning methods: deep Q-network, deep deterministic policy gradient, and proximal policy optimization, under different representations of the state-action space, to gain better insight into the relation between sample complexity and sample efficiency. The experiments are conducted on two robotic reaching tasks. The experimental results show that the complexity of the observation and action spaces is highly related to sample efficiency during training. This conclusion is in line with corresponding theoretical work in the field.

  • Picó Oristrell, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Investigating Deep Learning algorithms for end-to-end language-based interaction with domestic robots2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Building a socially assistive robot capable of helping with domestic work through understanding natural language instructions is still considered a difficult challenge. This work investigates how deep learning algorithms could help achieve this goal. Specifically, it focuses on enabling robots to identify objects while navigating in a house environment with language-based interactions. The proposed challenge is addressed by implementing three different models. The first model relates home objects to their typical locations in home regions by solving a classification problem with a neural network architecture. The second model focuses on navigating by understanding language-based commands; it is an LSTM-based sequence-to-sequence model with an attention mechanism over the language instructions, based on the work of Anderson et al. [1]. Finally, the third model identifies the target object by comprehending its associated referring expression; it is based on the listener model of Hatori et al. [2]. Each model is evaluated using different datasets suitable to the task. The Matterport3D simulator is used as the main home environment. The purpose of this work is to analyse and study the limitations of current solutions and the possible problems we could face when implementing them in a real scenario. Hence, limitations and conclusions from each of the steps are properly stated.

  • Ali, Saman
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Reducing gap between customers perception of the product online and the product in real2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    A customer buying a product online forms a perception of that product. If the product received in reality does not match this perception, the product is often returned. Small-scale companies selling products online suffer if their return rates are high. Rabta is one such company facing this issue, and it wants to minimize returns through its product page design. It is therefore interesting to study how a product should be displayed online so as to reduce this perception gap.

    In this thesis, a double diamond approach is used. Data was collected through interviews, observations, and videos. The results show that a pliable interactive display can solve this issue to some extent. A prototype was designed with easy-to-integrate functionalities. Feedback from prototype testing reveals that the senses of touch, sound, and vision combined can help in perceiving the material and color of the product. If these two properties are judged correctly by the user, the perception gap can be reduced.

  • Bakhtawar Shah, Mahmood
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Anomaly detection in electricity demand time series data2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The digitalization of the energy industry has made tremendous amounts of energy data available. This data is utilized across the entire energy value chain to provide value for customers and energy providers. One area that has gained recent attention in the energy industry is electricity load forecasting, for better scheduling and bidding on the electricity market. However, the electricity data used for forecasting is prone to anomalies, which can affect the accuracy of forecasts.

    In this thesis we propose two anomaly detection methods to tackle the issue of anomalies in electricity demand data: one based on a long short-term memory network (LSTM) and one based on a feed-forward neural network (FFNN). We compare their anomaly detection performance on two real-world electricity demand datasets. Our results indicate that the LSTM model tends to produce more robust behavior than the FFNN model on the dataset with regular daily and weekly patterns. However, there was no significant difference between the performance of the two models when the data was noisy and showed no regular patterns. While our results suggest that the LSTM model is effective when a regular pattern is present in the data, the results were not statistically significant enough to claim the superiority of LSTM over FFNN.
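    Forecast-based anomaly detectors like these typically work by comparing the model's prediction to the observed demand and flagging large residuals. A minimal sketch of one common thresholding rule (illustrative; not necessarily the thesis's exact criterion):

```python
import numpy as np

def flag_anomalies(actual, predicted, n_sigma=3.0):
    """Flag points whose forecast residual deviates more than n_sigma
    standard deviations from the mean residual. The forecaster itself
    (LSTM or FFNN) is assumed to have produced `predicted`."""
    residuals = np.asarray(actual) - np.asarray(predicted)
    mu, sigma = residuals.mean(), residuals.std()
    return np.abs(residuals - mu) > n_sigma * sigma
```

    The quality of such a detector therefore depends directly on how well the underlying forecaster captures the regular daily and weekly patterns in the demand series.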

  • Maziere, Louis
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Migration plan of Risky Total Return Swap to Bond Return Swap2020Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Since the 2008 crisis, hedging instruments have gained popularity with financial institutions. This is the case for the total return swap, used today by major institutions like Goldman Sachs and J.P. Morgan. Murex is a software provider for financial institutions. The company already had a total return swap product, the RTRS (Risky Total Return Swap), but with growing demand Murex decided to develop a new product, the BRS (Bond Return Swap). The company therefore now offers two bond total return swaps.

    This master thesis aims to analyze total return swaps and highlight the improvements of the BRS. After a theoretical analysis of the total return swap, a test campaign was carried out. For different types of bonds and different configurations of the total return swap, formulas are derived and compared to the returned values. The results given by the RTRS are good for basic bonds. If the bond is more complex, for instance a bond with credit risk or an amortized bond, the values returned by the RTRS are unreliable, if not wrong. On the other hand, the BRS performs well in every situation and positions itself as the best total return swap offered by Murex.
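    For readers unfamiliar with the instrument: in a bond total return swap, one leg pays the bond's total return (price change plus coupons) and the other pays a floating funding rate plus spread. The sketch below is a deliberately simplified illustration of those two legs, ignoring accruals, default events, and day-count conventions handled by real systems like the RTRS and BRS:

```python
def total_return_leg(price_start, price_end, coupons):
    """Total-return receiver's payoff per unit of bond over one period:
    price appreciation plus coupon income (simplified)."""
    return (price_end - price_start) + sum(coupons)

def funding_leg(notional, rate, spread, year_fraction):
    """Payer's floating leg: (reference rate + spread) on the notional
    for the period's year fraction (simplified)."""
    return notional * (rate + spread) * year_fraction
```

    The swap's net cash flow for the total-return receiver is the first leg minus the second; the complexity the thesis tests lies in valuing the first leg correctly for credit-risky and amortizing bonds.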

  • Bennet, Hannes
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Nokelainen, Nina
    KTH, School of Electrical Engineering and Computer Science (EECS).
    How to improve digital communication within course offerings2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Well-working communication between students and lecturers, both in and outside of class, has been shown to be very important for a positive learning experience and an effective learning environment. Students need to maintain a critical mindset about their own cognitive reasoning and their learning process. With support from the lecturer, this will improve students' self-assessment as well as their academic journey through higher education. As students are becoming increasingly at home online, it is of the utmost importance that online communication works as well as direct physical communication does. This can be achieved with a learning management system (LMS) that assists lecturers in handing out information and gives students an easy way to receive information regarding their educational work.

    This study aims to examine whether current LMS systems are sufficient. This was done by gathering data from both students and lecturers at two different universities through a survey and six semi-structured interviews to find out what the current issues are. The gathered data made it possible to determine the specific needs of both students and lecturers for a well-functioning LMS. DeLone and McLean created a system success model (D&M model) aimed at analysing the quality of an information system, which can be applied to an LMS. The model consists of different variables and their relationships to one another. An adaptation of the D&M model is the Hexagonal E-learning Assessment Model (HELAM) that includes similar but adapted variables for the information system. By comparing the results with the variables from these models, it was concluded that both universities face similar issues, even though their systems are different. The results also indicated that there are significant variables in the D&M and HELAM models that relate to how learning management systems can be used to their best potential.

  • Juzovitski, Emil
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Soft-Subspace Clustering on a High-Dimensional Musical Dataset2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Clustering analysis can be used to solve various tasks. In this thesis, we look at the possibility of using clustering techniques to help generate novel music playlists by clustering a high-dimensional dataset of songs. We compare how a newer category of clustering methods called soft-subspace clustering (SSC), which weighs features independently for each cluster, performs compared to the traditional k-means algorithm. The SSC algorithms EWKM (Entropy Weighted k-means), FSC (Fuzzy Subspace-Clustering), and LEKM (Log-Transformed Entropy Weighting k-means) were tested on a 5,104-sample subset of the dataset. Parameters were tuned based on an external validation index. The best performing SSC algorithm, which turned out to be LEKM, was then compared to k-means through a committee of judges with professional music composing experience. The results show that both LEKM and k-means are capable of clustering the dataset and generating novel clusters. Both algorithms create clusters of high general quality, but there is no demonstrated benefit of using LEKM over k-means on the given dataset. For a more conclusive result, a larger sample dataset would be needed.
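    The core idea behind entropy-weighted soft-subspace clustering can be sketched as follows. This is a simplified, hypothetical EWKM-style illustration (the `gamma` parameter and the deterministic initialization are assumptions), not the implementation evaluated in the thesis:

    ```python
    import numpy as np

    def ewkm(X, k, gamma=1.0, iters=20):
        """Simplified EWKM sketch: each cluster keeps its own feature
        weights, updated as a softmax over the negative within-cluster
        dispersion of each feature."""
        n, d = X.shape
        centers = X[np.linspace(0, n - 1, k, dtype=int)].astype(float)
        weights = np.full((k, d), 1.0 / d)
        for _ in range(iters):
            # assign each point to the cluster with the smallest weighted distance
            dists = np.stack(
                [((X - centers[j]) ** 2 * weights[j]).sum(axis=1) for j in range(k)],
                axis=1,
            )
            labels = dists.argmin(axis=1)
            for j in range(k):
                members = X[labels == j]
                if len(members) == 0:
                    continue  # leave empty clusters untouched
                centers[j] = members.mean(axis=0)
                disp = ((members - centers[j]) ** 2).sum(axis=0)
                w = np.exp(-disp / gamma)  # low-dispersion features get high weight
                weights[j] = w / w.sum()
        return labels, centers, weights
    ```

    Plain k-means is the special case where every cluster uses uniform feature weights throughout.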

  • Public defence: 2020-04-20 15:00 Stockholm
    Pitt, Christine
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.).
    Automated Text Analysis of Online Content in Marketing: Dictionary-Based Methods and Artificial Intelligence2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Far more than products or services, words are the most fundamental element in the exchanges between sellers and buyers. Understanding the words that constitute the text that is created when sellers and buyers interact with each other is therefore critical for marketing decision makers. This has become especially relevant in the age of the internet, and particularly with the advent of social media. In the pre-computer age, the content analysis of text was a time-consuming, laborious and frequently error-prone tool for marketing scholars and practitioners to use. Now, powerful computers and software enable the content analysis of text to be performed rapidly, and with little human effort or error. Two fundamental types of tools that enable the automated analysis of text are those that are dictionary-based and those that are artificial intelligence-based. The former automated text analysis tools rely on pre-constructed dictionaries, and then scan a piece of text in order to count and match the words in it to obtain scores on the dimensions of interest. Artificial intelligence-based automated text analysis tools employ machine learning algorithms to recognize patterns in text. They compare text to other pre-classified texts, having been trained by human experts to recognize the desired dimensions of a construct, and can “learn” to do this more effectively the more they are used. 
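    The dictionary-based approach described above can be sketched in a few lines; the word lists below are hypothetical stand-ins for the large, validated dictionaries such tools actually ship with:

    ```python
    # Hypothetical mini-dictionaries; real tools use much larger, validated lists.
    POSITIVE = {"good", "great", "excellent", "love"}
    NEGATIVE = {"bad", "poor", "terrible", "hate"}

    def dictionary_score(text):
        """Count word matches against each dictionary and return a
        normalized score in [-1, 1] (fully negative to fully positive)."""
        words = [w.strip(".,!?;:").lower() for w in text.split()]
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        total = pos + neg
        return 0.0 if total == 0 else (pos - neg) / total
    ```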

    The service dominant logic perspective on marketing holds that value is co-created by both sellers and buyers. This enables the identification of two fundamental marketing focus activities. First, sellers and buyers engage in acts of creation; second, sellers and buyers engage in acts of experience. On a wide range of forums, both buyers and sellers create text about these marketing focus activities. This text lends itself to analysis by the two categories of automated text analysis tools. Therefore, the central question is: How can automated textual analysis tools enable marketing practitioners and scholars to gain insights from different types of textual data? 

    Marketing scholars have recently given more attention to the use of automated text analysis tools in marketing research. These efforts have included overviews of the approach, suggestions on choosing amongst methods, and considerations of the sampling and statistical issues unique to automated text analysis. Less emphasis has been placed on specifically examining the use of the two different types of automated text analysis tools (dictionary-based and artificial intelligence based) in exploring the text generated by sellers and buyers in the context of the focal marketing activities of creation and experience. The current research therefore explores the following four research questions:


    • RQ1: What insights can an artificial intelligence-based automated text analysis tool deliver from depth interviews with respondents engaged in a creative focus activity?
    • RQ2: What insights can an artificial intelligence-based automated text analysis tool deliver from online reviews by respondents engaged in an experience focus activity?
    • RQ3: What insights can a dictionary-based automated text analysis tool deliver from online reviews by respondents engaged in an experience focus activity?
    • RQ4: What insights can a dictionary-based automated text analysis tool deliver from online interviews by respondents engaged in a creation focus activity?


    The empirical part of this research covered four papers, all of which involved analyzing textual data with the two categories of automated text analysis tools. Two of these papers used artificial intelligence-based automated text analysis tools in both the creation and experience settings, and the other two used dictionary-based automated text analysis tools, again, in these settings. 

    The overall contribution to the body of knowledge is to provide evidence of the applicability of both artificial intelligence-based and dictionary-based automated text analysis tools in two fundamental marketing focus activities, namely, creation and experience. The individual papers also further our understanding of the use of automated text analysis to study comparisons between groups, as well as correlations between traits and ways of speaking within samples of text.

    The document is organised as an overall introduction to the research narrative of four related published papers. The document opens with a chapter providing an overview of automated text analysis in marketing, the statement of the overall research problem, and the identification of four research sub-questions. This is followed by a chapter on the literature review. Next is a chapter on the methodology used in the studies. The fourth chapter considers the four papers in more detail, acknowledging their limitations, identifying the implications for marketing practice, and suggesting avenues for future research by marketing scholars. The four papers follow under Chapter 5 at the end. Three of these papers have either been published or accepted for publication; the other is in the second round of revision and resubmission.

  • Yang, Yi
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Automatic Online Calibration Between Lidar and Camera2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Online calibration between lidar and camera sensors is a prerequisite for sensor fusion in autonomous driving. This master thesis project proposes a real-time algorithm to find the six calibration parameters, (X, Y, Z) for translation and (roll, pitch, yaw) for rotation, between the coordinate frames of the two sensors. The algorithm achieves two goals. First, it can detect miscalibration, acting as an inspector that ensures the system stays well calibrated. Moreover, the algorithm can correctly refine the parameters during the vehicle's movement if the initial parameters contain some error. Experiments were run on the publicly available KITTI dataset. The results are competitive with, and for some parameters better than, those of other existing methods.

  • Polychronis Lioliopoulos, Alexandros
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Facilitating ideation and knowledge sharing in large organisations: Design of an innovation platform using gamification elements2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Large organizations often confine their innovation quests to the silos of dedicated departments. However, in a rapidly changing world, innovation processes need to be opened to wider circles. This study investigates the facilitation of knowledge sharing in large organizations and the effect of gamification on the perceived engagement of users. The specific use case was the Nordic bank Nordea, where an innovation platform (i.e., a virtual place where employees can share their ideas) was designed in two variations: a conventional one and a gamified one. The study followed the principles of design thinking, starting with initial user research (10 interviews) and arriving at a prototype design that was ultimately tested with 7 employees. The conventional design was experienced as good by the majority of the study participants, who in particular found it simple and usable. However, some of them experienced it as boring and, in general, it did not excite them. The gamified design, on the other hand, met with more universal acceptance. The respondents stressed that they would be motivated to use the platform on a regular basis because of the elements of gamification. More specifically, study participants greatly appreciated the point system, as well as the ability to compare themselves to peers and compete against their colleagues. In fact, all participants of this study preferred the gamified version when asked which of the two designs they would prefer to use daily. However, one of the quantitative metrics used, namely the Subjective Perception of Time, contradicted the findings from the interviews, leaving space for further investigation. All in all, the results of this study suggest that in large organizations there is potential for opening up innovation processes and engaging employees in them. Adding elements of gamification to such attempts can prove to be a great enhancement, since it can increase employee engagement and hook employees into the innovation loop, bringing multiple benefits to the company.

  • Broqvist Widham, Emil
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Tactical route planning in battlefield simulations with inverse reinforcement learning2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    In this report, Deep Maximum Entropy Inverse Reinforcement Learning has been applied to the problem of route planning in rough terrain while taking tactical parameters into account. The tactical parameter that the report focuses on is avoiding detection by predetermined static observers by keeping blocking terrain in the sight line. The experiments were done in a simplified gridworld using generated "expert" routes in place of an actual expert. The purpose of the project is to serve as an indication that this algorithm can be used to model these types of problems. The results show that the algorithm can indeed approximate this type of function, but it remains to be proven that the methods are useful when examples are taken from an actual expert and applied in a real-world scenario. Another finding is that the choice of value iteration as the algorithm for calculating policies turns out to be very time-consuming, which limits the amount of training and the scope of the problems that can be tackled.
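    For context, the value iteration step identified above as the bottleneck can be sketched as follows (a hypothetical tabular gridworld illustration, not the thesis's code); the nested sweep over every cell and action on every iteration is what makes it expensive on large maps:

    ```python
    import numpy as np

    def value_iteration(rewards, gamma=0.9, tol=1e-6):
        """Tabular value iteration on a gridworld. `rewards` is an (H, W)
        array of per-cell rewards; actions are the four compass moves,
        clipped at the grid boundary."""
        H, W = rewards.shape
        V = np.zeros((H, W))
        while True:
            V_new = np.empty_like(V)
            for i in range(H):          # full sweep over every cell...
                for j in range(W):
                    best = -np.inf
                    for di, dj in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                        ni = min(max(i + di, 0), H - 1)
                        nj = min(max(j + dj, 0), W - 1)
                        best = max(best, rewards[ni, nj] + gamma * V[ni, nj])
                    V_new[i, j] = best  # ...and every action per cell
            if np.abs(V_new - V).max() < tol:
                return V_new
            V = V_new
    ```

    With discount gamma = 0.9, a single reward of 1 at the goal yields a value of 1/(1 − 0.9) = 10 at the goal, decaying with distance from it.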

  • Istar Terra, Ahmad
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Risk Mitigation for Human-Robot Collaboration Using Artificial Intelligence2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In human-robot collaboration (HRC) scenarios, where humans and robots work together sharing the same workspace, there is a risk that hazardous situations may occur. In this work, an AI-based risk analysis solution has been developed to identify any condition that may harm a robot and its environment. The information from the risk analysis is used in a risk mitigation module to reduce the possibility of ending up in a hazardous situation. The goal is to develop safety for HRC scenarios using different AI algorithms and to explore possibilities for improving the efficiency of the system without compromising safety. This report presents risk mitigation strategies that were built on top of the robot's control system and based on the ISO 15066 standard. Each of them used semantic information (a scene graph) about the robot's environment and changed the robot's movement by scaling its speed. The first implementation of the risk mitigation strategy used a fuzzy logic system. This system analyzed the riskiest object's properties to adjust the speed of the robot accordingly. The second implementation used reinforcement learning and considered every object's properties. Three networks (a fully connected network, a convolutional neural network, and a hybrid network) were implemented to estimate the Q-value function. Additionally, local and edge computation architectures were implemented to measure the computational performance on the real robot.

    Each model was evaluated by measuring the safety aspect and the performance of the robot in a simulated warehouse scenario. All risk mitigation modules were able to reduce the risk of potential hazards. The fuzzy logic system was able to increase safety with the least reduction in efficiency. The reinforcement learning model gave safer operation but compromised efficiency more than the fuzzy logic system. Generally, the fuzzy logic system performed up to 28% faster than reinforcement learning but compromised up to 23% in terms of safety (mean risk speed value). In terms of computational performance, edge computation ran faster than local computation. The bottleneck of the process was the scene graph generation, which analyzes an image to produce information for the safety analysis. It took approximately 15 seconds to run the scene graph generation on the robot's CPU and 0.3 seconds on an edge device. The risk mitigation module can be selected depending on the KPIs of the warehouse operation, while the edge architecture must be used to achieve realistic performance.
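    The fuzzy-logic speed scaling idea can be illustrated with a toy controller. The membership functions and the distance range below are assumptions made for illustration, not the thesis's actual rule base:

    ```python
    def speed_scale(distance_m, max_dist=2.0):
        """Toy fuzzy-style speed scaling: membership in 'near', 'medium'
        and 'far' sets maps the distance to the riskiest object onto a
        speed factor in [0, 1] (0 = stop, 1 = full speed)."""
        d = min(max(distance_m, 0.0), max_dist) / max_dist  # normalize to [0, 1]
        near = max(0.0, 1.0 - 2.0 * d)   # 1 at d=0, fades out by d=0.5
        far = max(0.0, 2.0 * d - 1.0)    # 0 until d=0.5, 1 at d=1
        medium = 1.0 - near - far        # memberships sum to 1
        # defuzzify: weighted average of the rule outputs (stop / slow / full)
        return near * 0.0 + medium * 0.5 + far * 1.0
    ```

    The output falls smoothly toward zero as the riskiest object approaches, which is the qualitative behaviour a speed-scaling rule under ISO 15066-style separation monitoring is meant to produce.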

  • Papaioannou, Magdalini
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Design and analysis of different semantic SLAM algorithms2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The goal of this thesis project was to improve trajectory estimation of a traditional SLAM framework using semantic information generated by a deep neural network. The first part of the project involved designing and implementing a semantic integration method, in order to semantically classify keypoints and 3D map points within the pipeline. In the second part of the project, multiple bundle adjustment modifications were designed and implemented. Finally, the different methods were evaluated on a widely used SLAM benchmark. The final, proposed method outperforms the baseline on most of the benchmark sequences.

  • Gomez-Torrent, Adrian
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Micro and Nanosystems.
    Shah, Umer
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Micro and Nanosystems.
    Oberhammer, Joachim
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Micro and Nanosystems.
    Silicon Micromachined Receiver Calibration Waveguide Switch for THz Frequencies2020Conference paper (Refereed)
  • Zografos, Dimitrios
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electric Power and Energy Systems.
    Matevosyan, Julia
    Eriksson, Robert
    Baldick, Ross
    Ghandari, Mehrdad
    KTH, Superseded Departments (pre-2005), Electrical Systems. KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electric Power and Energy Systems.
    Frequency Response Assessment: Parameter Identification of Simplified Governor Response Models Using Historic Event Data2020In: Cigre Science & Engineering, ISSN 2426-1335, Vol. 17, p. 150-167Article in journal (Refereed)
    Abstract [en]

    With a growing share of inverter-interfaced generation in modern power systems, synchronous inertia is declining. This leads to a faster frequency drop after large generation trip events. During low inertia conditions, frequency containment reserves might not be sufficient to arrest frequency before it reaches the threshold for underfrequency load shedding. It is therefore becoming increasingly important for system operators to be able to assess frequency response in near real time. In contrast to detailed models, simplified models offer short simulation times and their parameters can be accurately identified and adapted to changing system conditions in near real time. In this paper, the parameters of governor response models are identified by minimizing the error residuals between the simulation models' and the actual system's measured active power response. This is accomplished by using historic event data from two system operators: the Electric Reliability Council Of Texas (ERCOT) and the Swedish Svenska kraftnät (Svk). Then, the respective frequency response models are simulated to assess frequency response. The results show that, despite their simplicity, the models provide a very good fit compared to the actual response. The models of ERCOT and Svk are examined; however, a similar approach can be employed to represent the frequency response of other power systems.
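    The residual-minimization step can be sketched under the assumption of a first-order governor response p(t) = K(1 − e^(−t/T)); the paper's actual models and fitting procedure are richer, so this is purely an illustrative simplification:

    ```python
    import numpy as np

    def fit_governor(t, p_meas, T_grid=np.linspace(0.5, 20.0, 196)):
        """Fit p(t) = K * (1 - exp(-t / T)) to a measured active-power
        trace by minimizing squared error residuals. For each candidate
        time constant T the gain K has a closed-form least-squares
        solution; the (K, T) pair with the smallest residual is kept."""
        best = (None, None, np.inf)
        for T in T_grid:
            basis = 1.0 - np.exp(-t / T)
            K = (p_meas * basis).sum() / (basis * basis).sum()
            sse = ((p_meas - K * basis) ** 2).sum()
            if sse < best[2]:
                best = (K, T, sse)
        return best  # (gain K, time constant T, sum of squared errors)
    ```

    Because K enters the model linearly, only the time constant needs a search, which keeps the identification fast enough for the near-real-time use the abstract describes.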

  • Zhou, Linghui
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Vu, Minh Thành
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Oechtering, Tobias J.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Skoglund, Mikael
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Fundamental Limits for Biometric Identification Systems without Privacy Leakage2019In: Proceedings 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton), IEEE, 2019Conference paper (Refereed)
    Abstract [en]

    We study fundamental limits for biometric identification systems without privacy leakage. Privacy-preserving biometric identification systems that involve helper data, secret keys and private keys are considered. The helper data are stored in a public database and can be used to support user identification. The secret key is either stored in an encrypted database or handed to the user, and can be used for authentication. Since the helper data are public and releasing biometric information raises privacy issues, the public helper data may only leak a negligible amount of biometric information. To achieve this, we use private keys to mask the helper data such that the public helper data contain as little information as possible about the biometrics. Moreover, a two-stage extension is also studied, where a clustering method is used so that the search complexity in the identification phase can be reduced.

  • Zhou, Linghui
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Vu, Minh Thành
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Oechtering, Tobias J.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Polar Codes for Identification Systems2019In: Proceedings of the 12th International ITG Conference on Systems, Communications and Coding, 2019Conference paper (Refereed)
    Abstract [en]

    In this paper, we study compression and identification algorithms for identification systems using polar codes. High-dimensional feature vectors representing users are first compressed and then enrolled in a database. When a noisy observation of an unknown enrolled user is made, it is compared with the entries in the database and the processing unit outputs an estimated user index. We develop three approaches based on polar codes and apply them to identification systems. This is the first time that identification systems based on polar codes have been studied. In particular, the identification mapping is challenging. The proposed methods provide a framework for applying polar codes to identification systems. The numerical evaluation results show that their complexity depends linearly on the number of users, and that the identification error rates are low and decrease as the sequence length increases.

  • Vu, Minh Thành
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Oechtering, Tobias J.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Skoglund, Mikael
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Hierarchical Identification with Pre-processing2019In: IEEE Transactions on Information Theory, ISSN 0018-9448, E-ISSN 1557-9654, Vol. 66, no 1, p. 82-113Article in journal (Refereed)
    Abstract [en]

    We study a two-stage identification problem with pre-processing to enable efficient data retrieval and reconstruction. In the enrollment phase, users' data are stored into the database in two layers. In the identification phase, an observer obtains an observation, which originates from an unknown user in the enrolled database through a memoryless channel. The observation is sent for processing in two stages. In the first stage, the observation is pre-processed, and the result is then used in combination with the stored first-layer information in the database to output a list of compatible users to the second stage. The second stage then uses the information of the users contained in the list from both layers and the original observation sequence to return the exact user identity and a corresponding reconstruction sequence. The rate-distortion regions are characterized for both discrete and Gaussian scenarios. Specifically, for a fixed list size and distortion level, the compression-identification trade-off in the Gaussian scenario results in three different operating cases characterized by three auxiliary functions. While the choice of the auxiliary random variable for the first-layer information is essentially unchanged when the identification rate is varied, the second one is selected based on the dominant function among the three. Due to the presence of a mixture of discrete and continuous random variables, the proof for the Gaussian case is highly non-trivial, which makes a careful measure-theoretic analysis necessary. In addition, we study a connection of the previous setting to two-observer identification and a related problem with a lower bound on the list size, where the latter is motivated by privacy concerns.

  • Vu, Minh Thành
    et al.
    KTH, School of Electrical Engineering (EES), Information Science and Engineering. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Oechtering, Tobias J.
    KTH, School of Electrical Engineering (EES), Information Science and Engineering. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Wiese, Moritz
    KTH, School of Electrical Engineering (EES), Information Science and Engineering. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Skoglund, Mikael
    KTH, School of Electrical Engineering (EES), Information Science and Engineering. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Successive refinement with cribbing and side information2017In: 54th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2016, IEEE, 2017, p. 70-77, article id 7852212Conference paper (Refereed)
    Abstract [en]

    In this work we consider successive refinement with side information, where decoders cooperate via a cribbing link. Rate-distortion regions are characterized for non-causal side information and for causal side information under appropriate constraints. An example is provided to illustrate the rate-distortion region for non-causal side information.