kth.se Publications
Showing 1 - 50 of 17986
  • 1. AAl Abdulsalam, Abdulrahman
    et al.
    Velupillai, Sumithra
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS. King's College, London.
    Meystre, Stephane
    UtahBMI at SemEval-2016 Task 12: Extracting Temporal Information from Clinical Text (2016). In: Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), Association for Computational Linguistics, 2016, p. 1256-1262. Conference paper (Refereed)
    Abstract [en]

    The 2016 Clinical TempEval continued the 2015 shared task on temporal information extraction with a new evaluation test set. Our team, UtahBMI, participated in all subtasks using machine learning approaches with ClearTK (LIBLINEAR), CRF++ and CRFsuite packages. Our experiments show that CRF-based classifiers yield, in general, higher recall for multi-word spans, while SVM-based classifiers are better at predicting correct attributes of TIMEX3. In addition, we show that an ensemble-based approach for TIMEX3 could yield improved results. Our team achieved competitive results in each subtask, with an F1 of 75.4% for TIMEX3, 89.2% for EVENT, 84.4% for event relations with document time (DocTimeRel), and 51.1% for narrative container (CONTAINS) relations.
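The ensemble idea mentioned in this abstract can be sketched as a simple majority vote over span predictions. This is a hypothetical illustration, not the paper's code: the function name, the (start, end, label) span shape, and the example spans are all invented for the sketch.

```python
# Hypothetical majority-vote ensemble over span predictions from several
# sequence taggers (e.g. a CRF and two SVM-style classifiers), in the
# spirit of the TIMEX3 ensemble described above. Shapes are illustrative.
from collections import Counter

def ensemble_vote(predictions, min_votes=2):
    """Keep a predicted (start, end, label) span if at least
    `min_votes` of the classifiers proposed it."""
    counts = Counter(span for classifier in predictions for span in classifier)
    return sorted(span for span, n in counts.items() if n >= min_votes)

crf_spans = [(0, 2, "DATE"), (10, 12, "TIME")]
svm_spans = [(0, 2, "DATE"), (20, 21, "DATE")]
liblinear_spans = [(0, 2, "DATE"), (10, 12, "TIME")]

print(ensemble_vote([crf_spans, svm_spans, liblinear_spans]))
# → [(0, 2, 'DATE'), (10, 12, 'TIME')]
```

Requiring agreement between classifiers trades recall for precision; lowering `min_votes` to 1 would instead union the predictions.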

  • 2.
    Aalto, Erik
    KTH, School of Computer Science and Communication (CSC).
    Learning Playlist Representations for Automatic Playlist Generation (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Spotify is currently the world's leading music streaming service. As the leader in music streaming, the task of providing listeners with music recommendations is vital for Spotify. Listening to playlists is a popular way of consuming music, but traditional recommender systems tend to focus on suggesting songs, albums or artists rather than providing consumers with playlists generated for their needs.

    This thesis presents a scalable and generalizable approach to music recommendation that performs song selection for the problem of playlist generation. The approach selects tracks related to a playlist theme by finding the characterizing variance for a seed playlist and projecting candidate songs into the corresponding subspace. Quantitative results show that the model outperforms a baseline that takes the full variance into account. Qualitative results also show that the model outperforms professionally curated playlists in some cases.

    Download full text (pdf)
    fulltext
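The "characterizing variance" idea in the abstract above can be sketched with a truncated SVD: fit a low-rank subspace to the seed playlist's feature vectors and rank candidates by how little of them is lost under projection. This is a minimal illustration under assumed 2-D audio features, not the thesis implementation; the function name and numbers are invented.

```python
# Illustrative sketch (not the thesis code): rank candidate tracks by
# projecting them onto the principal subspace of a seed playlist's
# feature vectors; a small residual means the track fits the theme.
import numpy as np

def top_candidates(seed, candidates, n_components=1, k=1):
    mu = seed.mean(axis=0)
    seed = seed - mu                     # centre on the seed playlist
    candidates = candidates - mu
    _, _, vt = np.linalg.svd(seed, full_matrices=False)
    basis = vt[:n_components]            # characterizing directions of the seed
    proj = candidates @ basis.T @ basis  # projection into the seed subspace
    errors = np.linalg.norm(candidates - proj, axis=1)
    return np.argsort(errors)[:k]        # smallest residual = best themed fit

seed = np.array([[1.0, 0.1], [2.0, 0.0], [3.0, -0.1]])
cands = np.array([[4.0, 0.05], [0.0, 3.0]])
print(top_candidates(seed, cands))       # → [0]
```

Using only the leading components, rather than the full variance, is what distinguishes the approach from the baseline described in the abstract.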
  • 3.
    Aarno, Daniel
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Intention recognition in human machine collaborative systems (2007). Licentiate thesis, monograph (Other scientific)
    Abstract [en]

    Robot systems have been used extensively during the last decades to provide automation solutions in a number of areas. The majority of the currently deployed automation systems are limited in that the tasks they can solve are required to be repetitive and predictable. One reason for this is the inability of today’s robot systems to understand and reason about the world. Therefore, the robotics and artificial intelligence research communities have made significant research efforts to produce more intelligent machines. Although significant progress has been made towards achieving robots that can interact in a human environment, there is currently no system that comes close to achieving the reasoning capabilities of humans.

    In order to reduce the complexity of the problem some researchers have proposed an alternative to creating fully autonomous robots capable of operating in human environments. The proposed alternative is to allow fusion of human and machine capabilities. For example, using teleoperation a human can operate at a remote site, which may not be accessible for the operator for a number of reasons, by issuing commands to a remote agent that will act as an extension of the operator’s body.

    Segmentation and recognition of operator generated motions can be used to provide appropriate assistance during task execution in teleoperative and human-machine collaborative settings. The assistance is usually provided in a virtual fixture framework where the level of compliance can be altered online in order to improve the performance in terms of execution time and overall precision. Acquiring, representing and modeling human skills are key research areas in teleoperation, programming-by-demonstration and human-machine collaborative settings. One of the common approaches is to divide the task that the operator is executing into several sub-tasks in order to provide manageable modeling.

    This thesis is focused on two aspects of human-machine collaborative systems: classification of an operator’s motion into a predefined state of a manipulation task, and assistance during a manipulation task based on virtual fixtures. The particular applications considered consist of manipulation tasks where a human operator controls a robotic manipulator in a cooperative or teleoperative mode.

    A method for online task tracking using adaptive virtual fixtures is presented. Rather than executing a predefined plan, the operator has the ability to avoid unforeseen obstacles and deviate from the model. To allow this, the probability of following a certain trajectory (sub-task) is estimated and used to automatically adjust the compliance of a virtual fixture, thus providing an online decision of how to fixture the movement.

    A layered hidden Markov model is used to model human skills. A gesteme classifier that classifies the operator’s motions into basic action primitives, or gestemes, is evaluated. The gesteme classifiers are then used in a layered hidden Markov model to model a simulated teleoperated task. The classification performance is evaluated with respect to noise, the number of gestemes, the type of hidden Markov model and the available number of training sequences. The layered hidden Markov model is applied to data recorded during the execution of a trajectory-tracking task in 2D and 3D with a robotic manipulator in order to give qualitative as well as quantitative results for the proposed approach. The results indicate that the layered hidden Markov model is suitable for modeling teleoperative trajectory-tracking tasks and that it is robust with respect to misclassifications in the underlying gesteme classifiers.

    Download full text (pdf)
    FULLTEXT01
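The bottom layer of a layered HMM scores observation sequences under a discrete hidden Markov model, which the forward algorithm does by summing over hidden state paths. The sketch below is a minimal, generic forward pass; the two states, the "gesteme-like" observation symbols and all probabilities are invented for illustration and are not the thesis's model.

```python
# Minimal forward-algorithm sketch for scoring an observation sequence
# under a discrete HMM -- the building block underneath a layered HMM.
# States, symbols and probabilities below are made up for illustration.

def forward(obs, start, trans, emit):
    """Return P(obs) by summing over all hidden state paths."""
    states = list(start)
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit[s][o] * sum(alpha[p] * trans[p][s] for p in states)
                 for s in states}
    return sum(alpha.values())

start = {"reach": 0.7, "grasp": 0.3}
trans = {"reach": {"reach": 0.6, "grasp": 0.4},
         "grasp": {"reach": 0.2, "grasp": 0.8}}
emit = {"reach": {"move": 0.9, "close": 0.1},
        "grasp": {"move": 0.2, "close": 0.8}}

p = forward(["move", "move", "close"], start, trans, emit)
print(round(p, 5))  # → 0.17298
```

In a layered model, per-segment likelihoods like `p` from several low-level HMMs would in turn serve as observations for a higher-level HMM over task states.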
  • 4.
    Aarno, Daniel
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sommerfeld, Johan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pugeault, Nicolas
    Kalkan, Sinan
    Woergoetter, Florentin
    Krüger, Norbert
    Early reactive grasping with second order 3D feature relations (2008). In: Recent Progress In Robotics: Viable Robotic Service To Human / [ed] Lee, S; Suh, IH; Kim, MS, 2008, Vol. 370, p. 91-105. Conference paper (Refereed)
    Abstract [en]

    One of the main challenges in the field of robotics is to make robots ubiquitous. To intelligently interact with the world, such robots need to understand the environment and situations around them and react appropriately; they need context-awareness. But how can robots be equipped with the capability of gathering and interpreting the necessary information for novel tasks through interaction with the environment, given only some minimal knowledge in advance? This has been a long-term question and one of the main drives in the field of cognitive system development. The main idea behind the work presented in this paper is that the robot should, like a human infant, learn about objects by interacting with them, forming representations of the objects and their categories that are grounded in its embodiment. For this purpose, we study an early object-grasping learning process in which the agent interacts with its surroundings based on a set of innate reflexes and knowledge about its embodiment. We stress that this is not a work on grasping as such; it is a system that interacts with the environment based on relations of 3D visual features generated through a stereo vision system. We show how geometry, appearance and spatial relations between the features can guide early reactive grasping, which can later on be used in a more purposive manner when interacting with the environment.

  • 5.
    Aasberg, Freddy
    KTH, School of Electrical Engineering and Computer Science (EECS).
    HypervisorLang: Attack Simulations of the OpenStack Nova Compute Node (2021). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Cloud services are growing in popularity, and global public cloud services were forecast to increase by 17% in 2020 [1]. The popularity of cloud services is due to the improved resource allocation for providers and simplicity of use for the customer. Due to the increasing popularity of cloud services and their increased use by companies, security assessment of the services is strategically becoming more critical. Assessing the security of a cloud system can be problematic because of its complexity, since the systems are composed of many different technologies. One way of simplifying the security assessment is attack simulation, covering cyberattacks on the investigated system. This thesis uses the Meta Attack Language (MAL) to create the Domain-Specific Language (DSL) HypervisorLang, which models the virtualisation layer in an OpenStack Nova setup. The result of this thesis is a proposed DSL, HypervisorLang, which uses attack simulation to model hostile usage of the service and defences to evade it. The hostile usage covers attacks such as denial of service, buffer overflows and out-of-bounds reads, and is sourced via known vulnerabilities. To implement the main components of the Nova module in HypervisorLang, literature studies were performed, together with threat modelling of the included Nova components. The correctness of HypervisorLang was evaluated by implementing test cases to display the different attack steps included in the model. However, the results also show some limitations of the evaluations, which are proposed for further research.

    Download full text (pdf)
    fulltext
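The attack-simulation idea behind a MAL-based DSL can be pictured as traversal of an attack-step graph, where OR-steps are reached from any compromised parent and AND-steps only when all parents are compromised. The sketch below is a toy fixed-point traversal, not MAL or HypervisorLang itself; the step names are invented for illustration.

```python
# Toy sketch of attack-graph reachability in the spirit of MAL-style
# attack simulation: OR-steps need one compromised parent, AND-steps
# need all of them. Step names below are invented for illustration.

def reachable(graph, and_steps, entry):
    """Return every attack step the attacker can eventually reach."""
    reached, changed = set(entry), True
    while changed:
        changed = False
        for step, parents in graph.items():
            if step in reached or not parents:
                continue
            ok = (all(p in reached for p in parents) if step in and_steps
                  else any(p in reached for p in parents))
            if ok:
                reached.add(step)
                changed = True
    return reached

graph = {
    "guest_access": [],
    "buffer_overflow": ["guest_access"],
    "oob_read": ["guest_access"],
    "hypervisor_compromise": ["buffer_overflow", "oob_read"],  # AND-step
}
print(sorted(reachable(graph, {"hypervisor_compromise"}, {"guest_access"})))
# → ['buffer_overflow', 'guest_access', 'hypervisor_compromise', 'oob_read']
```

Real MAL simulations additionally attach probability distributions (time-to-compromise) and defences to each step; this sketch keeps only the graph logic.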
  • 6.
    Aasberg Pipirs, Freddy
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Svensson, Patrik
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Tenancy Model Selection Guidelines (2018). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Software as a Service (SaaS) is a subset of cloud services where a vendor provides software as a service to customers. The SaaS application is installed on the SaaS provider’s servers and is often accessed via the web browser. In the context of SaaS, a customer is called a tenant; this is often an organization accessing the SaaS application, but it could also be a single individual. A SaaS application can be classified into tenancy models. A tenancy model describes how a tenant’s data is mapped to the storage on the server side of the SaaS application. Through their research, the authors have drawn the conclusion that there is a lack of guidance for selecting tenancy models. The purpose of this thesis is to provide such guidance. The short-term goal is to create a tenancy selection guide; the long-term goal is to provide researchers and students with research material. This thesis provides a guidance model for the selection of tenancy models, called Tenancy Model Selection Guidelines (TMSG). TMSG was evaluated by interviewing two professionals from the software industry, using the criteria Interviewee credibility, Syntactic correctness, Semantic correctness, Usefulness and Model flexibility. In the interviews, both interviewees said that TMSG needs further refinement; still, they were positive about the achieved result.

    Download full text (pdf)
    fulltext
  • 7. Aasi, Parisa
    et al.
    Nunes, Ivan
    KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.
    Rusu, Lazar
    Hodosi, Georg
    Does Organizational Culture Matter in IT Outsourcing Relationships? (2015). In: 2015 48th Hawaii International Conference on System Sciences (HICSS), IEEE Computer Society, 2015, p. 4691-4699. Conference paper (Refereed)
    Abstract [en]

    IT Outsourcing (ITO) is widely used by Multinational Companies (MNCs) as a sourcing strategy today. The ITO relationship between service buyer and provider then becomes a crucial issue in achieving the expected objectives. This research sheds light on the influence of the organizational culture (OC) of the buyer company on its ITO relationship with the provider. More specifically, the influence that OC can have on the four significant dimensions of trust, cooperation, communication and commitment in ITO is studied through a qualitative analysis. IT managers of six MNCs were interviewed, which exposed the connection between OC and ITO relationship factors. An open communication culture, speed of adaptation to change, receiving innovative solutions, flat or hierarchical structures and degree of responsibility appeared as the most visible differences between the OCs of MNCs influencing ITO relationships. The results can be used for improving ITO by considering the influence of OC to gain more benefits from outsourcing.

  • 8. Abaglo, A. J.
    et al.
    Bonalda, C.
    Pertusa, Emeline
    KTH.
    Environmental Digital Model: Integration of BIM into environmental building simulations (2017). In: CISBAT 2017 International Conference – Future Buildings & Districts – Energy Efficiency from Nano to Urban Scale, Elsevier, 2017, Vol. 122, p. 1063-1068. Conference paper (Refereed)
    Abstract [en]

    The digital model and BIM are creating a revolution with the transition from 2D to 3D models. However, environmental professionals carry out building simulations with a wide range of software tools with little or no communication between them. This often leads to the creation of several 3D models and therefore a significant loss of time, as well as possible inconsistencies in geometrical information. Our research aims to use the interoperability potential offered by BIM-friendly software to develop gateways that optimize the modeling phase and improve the restitution of the studies through visual integration in a digital mockup.

  • 9.
    Abbas, Haider
    KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Options-Based Security-Oriented Framework for Addressing Uncertainty Issues in IT Security (2010). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Continuous development and innovation in Information Technology introduces novel configuration methods, software development tools and hardware components. This steady state of flux is very desirable as it improves productivity and the overall quality of life in societies. However, the same phenomenon also gives rise to unseen threats, vulnerabilities and security concerns that are becoming more critical with the passage of time. As an implication, technological progress strongly impacts organizations’ existing information security methods, policies and techniques, making existing security measures obsolete and mandating reevaluation, which results in an uncertain IT infrastructure. In order to address these critical concerns, options-based reasoning borrowed from corporate finance is proposed and adapted for evaluation of security architecture and decision-making to handle them at organizational level. Options theory has provided significant guidance for uncertainty management in several domains, such as Oil & Gas, government R&D and IT security investment projects. We have applied the options valuation technique in a different context to formalize optimal solutions in uncertain situations for three specific and identified uncertainty issues in IT security. In the research process, we formulated an adaptation model for expressing options theory in terms useful for IT security, which provided knowledge to formulate and propose a framework for addressing uncertainty issues in information security. To validate the efficacy of this proposed framework, we have applied this approach to the SHS (Spridnings- och Hämtningssystem) and ESAM (E-Society) systems used in Sweden.
As an ultimate objective of this research, we intend to develop a solution that is amenable to automation for the three main problem areas caused by technological uncertainty in information security: i) dynamically changing security requirements, ii) externalities caused by a security system, iii) obsoleteness of evaluation. The framework is general and capable of dealing with other uncertainty management issues and their solutions, but in this work we primarily deal with the three aforementioned uncertainty problems. The thesis presents an in-depth background and analysis study for a proposed options-based security-oriented framework with case studies for SHS and ESAM systems. It has also been assured that the framework formulation follows the guidelines from industry best practices criteria/metrics. We have also proposed how the whole process can be automated as the next step in development.
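The options-theory ingredient that this thesis and the related entries below adapt can be illustrated with a toy one-step binomial valuation of the option to defer a decision under uncertainty. The numbers and the "defer a security investment" framing are invented for illustration; the thesis adapts options reasoning well beyond this calculation.

```python
# Toy one-step binomial sketch of options reasoning: the value of
# waiting comes from being able to skip the downside branch. All
# figures are invented for illustration.

def defer_value(v_up, v_down, p_up, rate):
    """Expected discounted payoff of waiting one period: in each
    branch the organization only commits if the payoff is positive."""
    return (p_up * max(v_up, 0) + (1 - p_up) * max(v_down, 0)) / (1 + rate)

# Committing now locks in the expected payoff, losses included;
# deferring keeps the choice open until the uncertainty resolves.
invest_now = 0.5 * 40 + 0.5 * (-20)          # = 10
defer = defer_value(40, -20, 0.5, 0.05)
print(round(invest_now, 2), round(defer, 2))  # → 10.0 19.05
```

The positive gap between the two numbers is the value of flexibility, which is the quantity options-based frameworks use to rank decisions under uncertainty.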

  • 10.
    Abbas, Haider
    KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.
    Threats and Security Measures Involved in VoIP-Telephony (2006). Independent thesis, Advanced level (degree of Master (One Year)), 30 credits / 45 HE credits. Student thesis
  • 11.
    Abbas, Haider
    et al.
    KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Magnusson, Christer
    Department of Computer and System Sciences, Stockholm University, Sweden.
    Yngström, Louise
    Department of Computer and System Sciences, Stockholm University, Sweden.
    Hemani, Ahmed
    KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Addressing Dynamic Issues in Information Security Management (2011). In: Information Management & Computer Security, ISSN 0968-5227, E-ISSN 1758-5805, Vol. 19, no 1, p. 5-24. Article in journal (Refereed)
    Abstract [en]

    Purpose – The paper addresses three main problems resulting from uncertainty in information security management: i) dynamically changing security requirements of an organization, ii) externalities caused by a security system and iii) obsolete evaluation of security concerns.

    Design/methodology/approach – In order to address these critical concerns, a framework based on options reasoning borrowed from corporate finance is proposed and adapted to the evaluation of security architecture and decision-making for handling these issues at organizational level. The adaptation as a methodology is demonstrated by a large case study validating its efficacy.

    Findings – The paper shows through three examples that it is possible to have a coherent methodology, building on options theory, to deal with uncertainty issues in information security at an organizational level.

    Practical implications – To validate the efficacy of the methodology proposed in this paper, it was applied to the SHS (Spridnings- och Hämtningssystem: Dissemination and Retrieval System) system. The paper introduces the methodology, presents its application to the SHS system in detail and compares it to current practice.

    Originality/value – This research is relevant to information security management in organizations, particularly issues of changing requirements and evaluation in uncertain circumstances created by progress in technology.

  • 12.
    Abbas, Haider
    et al.
    KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Magnusson, Christer
    Department of Computer and System Sciences, Stockholm University, Sweden.
    Yngström, Louise
    Department of Computer and System Sciences, Stockholm University, Sweden.
    Hemani, Ahmed
    KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Architectural Description of an Automated System for Uncertainty Issues Management in Information Security (2010). In: International Journal of Computer Science and Information Security, ISSN 1947-5500, Vol. 8, no 3, p. 89-67. Article in journal (Refereed)
  • 13.
    Abbas, Haider
    et al.
    KTH, School of Information and Communication Technology (ICT), Electronic, Computer and Software Systems, ECS.
    Magnusson, Christer
    Yngström, Louise
    Hemani, Ahmed
    KTH, School of Information and Communication Technology (ICT), Electronic, Computer and Software Systems, ECS.
    A Structured Approach for Internalizing Externalities Caused by IT Security Mechanisms (2010). In: IEEE ETCS 2010, Wuhan, China, 2010, p. 149-153. Conference paper (Refereed)
    Abstract [en]

    Organizations relying on Information Technology for their business processes have to employ various security mechanisms (authentication, authorization, hashing, encryption, etc.) to achieve their organizational security objectives of data confidentiality, integrity and availability. Apart from their intended role of increasing the security level of the organization, these security mechanisms may also affect other systems outside the organization in a positive or negative manner; such effects are called externalities. Externalities emerge in several ways, i.e. direct cost, direct benefit, indirect cost and indirect benefit. Organizations barely consider positive externalities although they can be beneficial, and negative externalities that could create vulnerabilities are simply ignored. In this paper, we present an infrastructure to streamline information security externalities that appear dynamically for an organization.

  • 14.
    Abbas, Haider
    et al.
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    Magnusson, Christer
    KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.
    Yngström, Louise
    KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.
    Hemani, Ahmed
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    Analyzing IT Security Evaluation Needs for Developing Countries (2009). In: IPID Annual Workshop 2009, Örebro, Sweden, 2009. Conference paper (Other academic)
  • 15.
    Abbas, Haider
    et al.
    KTH, School of Information and Communication Technology (ICT), Electronic, Computer and Software Systems, ECS.
    Sundkvist, Stefan
    KTH, School of Information and Communication Technology (ICT).
    Increasing the Performance of Crab Linux Router Simulation Package Using XEN (2006). In: IEEE International Conference on Industrial and Information Systems, Kandy, Sri Lanka, 2006, p. 459-462. Conference paper (Refereed)
    Abstract [en]

    Nowadays hardware components are very expensive, especially if the prime purpose is to perform routing-related lab exercises. Physically connected network resources are required to get the desired results, and configuration of network resources in a lab exercise consumes much of the students' and scientists' time. The router simulation package Crab, based on Knoppix, Quagga and User Mode Linux (UML), is designed to let students perform lab exercises on a standalone computer where no real network equipment is needed. In addition, it provides the facility of connecting to real network equipment. Crab also handles the preconfiguration of different parts of the simulated networks, such as automatic IP addressing. This paper describes the performance enhancement of Crab achieved by replacing the User Mode Linux virtual machine with XEN, capable of providing ten virtual sessions concurrently on a standalone computer.

  • 16.
    Abbas, Haider
    et al.
    KTH, School of Information and Communication Technology (ICT), Electronic, Computer and Software Systems, ECS.
    Yngström, Louise
    Hemani, Ahmed
    KTH, School of Information and Communication Technology (ICT), Electronic, Computer and Software Systems, ECS.
    Adaptability Infrastructure for Bridging IT Security Evaluation and Options Theory (2009). In: ACM-IEEE SIN 2009 International Conference on Security of Information and Networks, North Cyprus, 2009. Conference paper (Refereed)
    Abstract [en]

    The constantly rising threats in IT infrastructure raise many concerns for an organization; altering security requirements according to a dynamically changing environment, the need for midcourse decision management and deliberate evaluation of security measures are the most striking. Common Criteria for IT security evaluation has long been considered to be victimized by uncertain IT infrastructure, and is regarded as a resource-hungry, complex and time-consuming process. Considering this aspect, we have continued our research quest for analyzing the opportunities to empower the IT security evaluation process using Real Options thinking. The focus of our research is not only the applicability of real options analysis in IT security evaluation but also observing its implications in various domains, including IT security investments and risk management. We find it motivating and worthwhile to use an established method from corporate finance, i.e. real options, and utilize its rule-of-thumb technique as a road map to counter uncertainty issues in the evaluation of IT products. We believe employing options theory in security evaluation will provide the intended benefits: i) managing dynamically changing security requirements, ii) accelerating the evaluation process and iii) midcourse decision management. Although it has all the capabilities of effective uncertainty management, options theory follows work procedures based on mathematical calculations quite different from information security work processes. In this paper, we address the divergences between the work processes of security evaluation and real options analysis. We present an adaptability infrastructure to bridge the gap and make them coherent with each other. This liaison will transform real options concepts into a compatible mode that provides grounds to target IT security evaluation and Common Criteria issues. We address the ESAM system as an example for illustration and applicability of the concepts.

  • 17.
    Abbas, Haider
    et al.
    KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Yngström, Louise
    Hemani, Ahmed
    KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Empowering Security Evaluation of IT Products with Options Theory (2009). In: 30th IEEE Symposium on Security & Privacy, Oakland, USA, 2009. Conference paper (Refereed)
  • 18.
    Abbas, Haider
    et al.
    KTH, School of Information and Communication Technology (ICT), Electronic, Computer and Software Systems, ECS.
    Yngström, Louise
    Hemani, Ahmed
    KTH, School of Information and Communication Technology (ICT), Electronic, Computer and Software Systems, ECS.
    Option Based Evaluation: Security Evaluation of IT Products Based on Options Theory (2009). In: IEEE ECBS-EERC 2009, New York: IEEE, 2009, p. 134-141. Conference paper (Refereed)
    Abstract [en]

    Reliability of IT systems and infrastructure is a critical need for organizations to trust their business processes. This makes security evaluation of IT systems a prime concern for these organizations. Common Criteria (CC) is an elaborate, globally accepted security evaluation process that fulfills this need. However, CC rigidly follows the initial specification and security threats, takes too long to evaluate, and as such is also very expensive. Rapid development in technology, and with it new security threats, further aggravates the long evaluation time problem of CC, to the extent that by the time a CC evaluation is done it may no longer be valid because new security threats have emerged that have not been factored in. To address these problems, we propose a novel Option Based Evaluation (OBE) methodology for the security of IT systems that can also be considered an enhancement to the CC process. The objective is to address uncertainty issues in the IT environment and speed up the slow CC-based evaluation processes. OBE will follow an incremental evaluation model and address the following main concerns based on options theory: i) managing dynamic security requirements with midcourse decision management, ii) devising evaluation as an improvement process and iii) reducing the cost and time of evaluating an IT product.

  • 19.
    Abbas, Haider
    et al.
    KTH, School of Information and Communication Technology (ICT), Electronic, Computer and Software Systems, ECS.
    Yngström, Louise
    Hemani, Ahmed
    KTH, School of Information and Communication Technology (ICT), Electronic, Computer and Software Systems, ECS.
    ROA Based Agile Security Evaluation of IT Products for Developing Countries (2009). In: IPID 4th Annual Conference 2009, London, UK, 2009. Conference paper (Other academic)
  • 20.
    Abbas, Haider
    et al.
    KTH, School of Information and Communication Technology (ICT), Electronic, Computer and Software Systems, ECS.
    Yngström, Louise
    KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.
    Hemani, Ahmed
    KTH, School of Information and Communication Technology (ICT), Electronic, Computer and Software Systems, ECS.
    Security Evaluation of IT Products: Bridging the Gap between Common Criteria (CC) and Real Option Thinking (2008). In: WCECS 2008: World Congress on Engineering and Computer Science, 2008, p. 530-533. Conference paper (Refereed)
    Abstract [en]

    Information security has long been considered a key concern for organizations benefiting from the electronic era. Rapid technological development over the last decade has given rise to novel security threats, making IT an uncertain infrastructure. For this reason, business organizations have an acute need to evaluate the security aspects of their IT infrastructure. For many years, the Common Criteria (CC) has been widely used and accepted for evaluating the security of IT products. It does not impose predefined security rules that a product should exhibit, but rather provides a language for security evaluation. CC has certain advantages over ITSEC, CTCPEC and TCSEC due to its ability to address all three dimensions: a) it gives users the opportunity to specify their security requirements, b) it gives developers an implementation guide, and c) it provides comprehensive criteria for evaluating the security requirements. Among the few notable shortcomings of CC are the amount of resources and time an evaluation consumes. Another drawback of CC is that security requirements must be defined before the project starts, despite the uncertainty of the IT environment. Real Options Analysis (ROA) is a well-known modern methodology for making investment decisions on projects under uncertainty. It is based on options theory, which provides not only strategic flexibility but also helps to consider hidden options during uncertainty. ROA comes in two flavors: one for financial option pricing and one for more uncertain real-world problems where the end results are not deterministic. Information security is one of the core areas where researchers employ ROA to make security investment decisions. In this paper, we give a brief introduction to ROA and its use in various domains. We evaluate the use of real-options-based methods to enhance the Common Criteria evaluation methodology, in order to manage dynamic security requirement specification and to reduce the required time and resources. We analyze the possibilities to overcome CC limitations from the perspective of the end user, the developer and the evaluator. We believe that the ROA-enhanced capabilities can potentially stop and possibly reverse this trend and strengthen CC usage with a more effective and responsive evaluation methodology.
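    The options-theory reasoning invoked in the abstracts above can be made concrete with a standard binomial-lattice valuation of a deferral option. The following is a minimal sketch; all parameter values are illustrative assumptions, not numbers from the paper:

    ```python
    # Minimal binomial-lattice valuation of a deferral (call-style) real option:
    # the option to invest `cost` later in a project worth `v0` today.
    # All parameters are illustrative; the paper itself gives no numbers.

    def binomial_option_value(v0, cost, up, down, rate, steps):
        """Value the option to invest `cost` in a project currently worth `v0`."""
        q = (1 + rate - down) / (up - down)          # risk-neutral probability
        # terminal project values and option payoffs
        values = [v0 * (up ** j) * (down ** (steps - j)) for j in range(steps + 1)]
        payoff = [max(v - cost, 0.0) for v in values]
        # backward induction through the lattice
        for _ in range(steps):
            payoff = [(q * payoff[j + 1] + (1 - q) * payoff[j]) / (1 + rate)
                      for j in range(len(payoff) - 1)]
        return payoff[0]

    option = binomial_option_value(v0=100.0, cost=105.0, up=1.3, down=0.8,
                                   rate=0.05, steps=2)
    ```

    Even though investing immediately has negative net value (100 - 105), the option to wait is worth a positive amount, which is the flexibility argument the ROA literature makes.
    
    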

  • 21.
    Abbas, Sahib
    KTH, School of Information and Communication Technology (ICT).
    Lösning till mobilitetsproblem samt tillgänglighet till hemsidan för Iraks ambassad2014Independent thesis Advanced level (degree of Master (Two Years)), 10 credits / 15 HE creditsStudent thesis
    Abstract [sv]

    Using the internet has nowadays become part of everyday life. The Internet has, in a sense, divided the world into many parts, where each part shares information in many different forms, organized into different categories. We share information in many different ways, but the fastest and easiest way to spread information is via the internet.

    Technology continuously brings new ideas, and we develop new methods that make it even easier for people to reach the information they are looking for via the internet. Two of the best-known ways of sharing information are websites and native applications.

    I carried out my degree project at the Iraqi embassy, a state Iraqi organization located at Baldersgatan 6A, Stockholm. The embassy's main task is to help Iraqi and non-Iraqi citizens residing in Sweden with certain matters.

    The embassy has a great deal of information that it tries to share as much as possible via its website, in order to reduce the administrative burden it faces. The website the embassy had was unstructured and had a dull design, which made it difficult for users to find what they were looking for.

    This thesis presents my solution to the embassy's problems: building an entirely new website that is mobile-adapted, better designed and more structured than the old one. This makes the website much easier to use. It also solves the mobility problem: the embassy had started developing a native iPhone app, but the project was cancelled halfway through because it cost too much, and it was realized that developing native apps for Android and the other operating systems would incur even greater costs. The thesis also describes how a mobile-adapted website can be developed, which methods and models I used in developing the website, and the results I obtained from the methods used in this project.

  • 22.
    Abbas, Zainab
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Al-Shishtawy, Ahmad
    RISE SICS, Stockholm, Sweden.
    Girdzijauskas, Sarunas
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. RISE SICS, Stockholm, Sweden..
    Vlassov, Vladimir
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Short-Term Traffic Prediction Using Long Short-Term Memory Neural Networks2018Conference paper (Refereed)
    Abstract [en]

    Short-term traffic prediction allows Intelligent Transport Systems to proactively respond to events before they happen. With the rapid increase in the amount, quality, and detail of traffic data, new techniques are required that can exploit the information in the data in order to provide better results while being able to scale and cope with increasing amounts of data and growing cities. We propose and compare three models for short-term road traffic density prediction based on Long Short-Term Memory (LSTM) neural networks. We have trained the models using real traffic data collected by Motorway Control System in Stockholm that monitors highways and collects flow and speed data per lane every minute from radar sensors. In order to deal with the challenge of scale and to improve prediction accuracy, we propose to partition the road network into road stretches and junctions, and to model each of the partitions with one or more LSTM neural networks. Our evaluation results show that partitioning of roads improves the prediction accuracy by reducing the root mean square error by the factor of 5. We show that we can reduce the complexity of LSTM network by limiting the number of input sensors, on average to 35% of the original number, without compromising the prediction accuracy.
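    As a sketch of the preprocessing such sequence models need, the following turns a per-minute sensor reading series into supervised (window, next value) training pairs, the usual input shape for an LSTM. The window length and the toy readings are assumptions for illustration, not details from the paper:

    ```python
    # Build supervised (input window -> next reading) pairs from one sensor's
    # series -- the standard preprocessing step before training an LSTM
    # on traffic data. The series and window length here are illustrative.

    def make_windows(series, window):
        """Return (inputs, targets): each input is `window` consecutive
        readings, the target is the reading that immediately follows."""
        inputs, targets = [], []
        for i in range(len(series) - window):
            inputs.append(series[i:i + window])
            targets.append(series[i + window])
        return inputs, targets

    readings = [10, 12, 15, 14, 13, 17, 20, 18]   # per-minute densities (toy data)
    X, y = make_windows(readings, window=3)
    # X[0] == [10, 12, 15] and y[0] == 14
    ```

    In the partitioned setup described above, pairs like these would be built per road stretch or junction, one dataset per LSTM.
    
    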

  • 23.
    Abbas, Zainab
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Kalavri, Vasiliki
    Systems Group, ETH, Zurich, Switzerland.
    Carbone, Paris
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Vlassov, Vladimir
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Streaming Graph Partitioning: An Experimental Study2018In: Proceedings of the VLDB Endowment, E-ISSN 2150-8097, Vol. 11, no 11, p. 1590-1603Article in journal (Refereed)
    Abstract [en]

    Graph partitioning is an essential yet challenging task for massive graph analysis in distributed computing. Common graph partitioning methods scan the complete graph to obtain structural characteristics offline, before partitioning. However, the emerging need for low-latency, continuous graph analysis led to the development of online partitioning methods. Online methods ingest edges or vertices as a stream, making partitioning decisions on the fly based on partial knowledge of the graph. Prior studies have compared offline graph partitioning techniques across different systems. Yet, little effort has been put into investigating the characteristics of online graph partitioning strategies.

    In this work, we describe and categorize online graph partitioning techniques based on their assumptions, objectives and costs. Furthermore, we perform an experimental comparison across different applications and datasets, using a unified distributed runtime based on Apache Flink. Our experimental results show that model-dependent online partitioning techniques, such as low-cut algorithms, offer better performance for communication-intensive applications such as bulk synchronous iterative algorithms, albeit at higher partitioning costs. Otherwise, model-agnostic techniques trade data locality for lower partitioning costs and balanced workloads, which is beneficial when executing data-parallel single-pass graph algorithms.
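    The streaming setting studied above can be illustrated with a minimal greedy edge partitioner: each edge is assigned on arrival, using only the partial state seen so far, to the partition that already holds most of its endpoints, with load as a tie-breaker. This is a generic sketch of the online paradigm, not one of the specific algorithms evaluated in the paper:

    ```python
    # Greedy streaming edge partitioning: edges arrive one at a time and are
    # placed using only partial knowledge of the graph (the hallmark of
    # online methods). Graph and partition count are illustrative.

    from collections import defaultdict

    def stream_partition(edges, k):
        """Assign each edge to one of k partitions, preferring partitions that
        already contain the edge's endpoints, then the least-loaded one."""
        vertex_parts = defaultdict(set)   # vertex -> partitions it appears in
        load = [0] * k                    # edges per partition
        assignment = []
        for u, v in edges:
            # score: endpoint overlap first, lower load breaks ties
            best = max(range(k),
                       key=lambda p: (int(p in vertex_parts[u]) +
                                      int(p in vertex_parts[v]), -load[p]))
            assignment.append(best)
            vertex_parts[u].add(best)
            vertex_parts[v].add(best)
            load[best] += 1
        return assignment

    # two disjoint triangles: a good edge partitioner keeps each one together
    edges = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6)]
    parts = stream_partition(edges, k=2)
    ```

    On this toy stream the two triangles land in separate partitions, which is the locality that low-cut techniques aim for at the price of extra bookkeeping.
    
    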

  • 24.
    Abbas, Zainab
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Sigurdsson, Thorsteinn Thorri
    KTH.
    Al-Shishtawy, Ahmad
    RISE Res Inst Sweden, Stockholm, Sweden..
    Vlassov, Vladimir
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Evaluation of the Use of Streaming Graph Processing Algorithms for Road Congestion Detection2018In: 2018 IEEE INT CONF ON PARALLEL & DISTRIBUTED PROCESSING WITH APPLICATIONS, UBIQUITOUS COMPUTING & COMMUNICATIONS, BIG DATA & CLOUD COMPUTING, SOCIAL COMPUTING & NETWORKING, SUSTAINABLE COMPUTING & COMMUNICATIONS / [ed] Chen, JJ Yang, LT, IEEE COMPUTER SOC , 2018, p. 1017-1025Conference paper (Refereed)
    Abstract [en]

    Real-time road congestion detection makes it possible to improve traffic safety and route planning. In this work, we propose to use streaming graph processing algorithms for road congestion detection and evaluate their accuracy and performance. We represent road infrastructure sensors as a directed weighted graph and adapt the Connected Components algorithm and some existing graph processing algorithms, originally used for community detection in social network graphs, to the task of road congestion detection. In our approach, we detect Connected Components or communities of sensors with similarly weighted edges that reflect different traffic states, e.g., free flow or a congested state, in the regions covered by the detected sensor groups. We have adapted and implemented the Connected Components and community detection algorithms for detecting groups in the weighted sensor graphs in both a batch and a streaming manner. We evaluate our approach by building and processing the road infrastructure sensor graph for Stockholm's highways using real-world data from the Motorway Control System operated by the Swedish traffic authority. Our results indicate that the Connected Components and DenGraph community detection algorithms can detect congestion with accuracy up to approximately 94% for Connected Components and up to approximately 88% for DenGraph. The Louvain Modularity algorithm for community detection fails to detect congestion regions for the sparsely connected graphs representing the roads considered in this study. The Hierarchical Clustering algorithm using speed and density readings is able to detect congestion, but without details such as shockwaves.
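    The core grouping idea, connecting sensors whose shared edges indicate the same traffic state, can be sketched as a union-find pass over edges whose weight (here, average speed on the stretch) falls below a congestion threshold. The sensor ids, speeds and threshold below are invented for illustration:

    ```python
    # Group sensors into congested components: union only edges whose weight
    # (average speed on the stretch) is below a congestion threshold.
    # Sensor ids, speeds and the threshold are illustrative.

    def find(parent, x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def congested_components(edges, threshold):
        """edges: (sensor_a, sensor_b, speed). Returns frozensets of sensors
        connected by slow (congested) stretches."""
        parent = {}
        for a, b, speed in edges:
            parent.setdefault(a, a)
            parent.setdefault(b, b)
            if speed < threshold:
                parent[find(parent, a)] = find(parent, b)
        groups = {}
        for node in parent:
            groups.setdefault(find(parent, node), set()).add(node)
        # keep multi-sensor groups only: a lone sensor is not a congestion region
        return {frozenset(g) for g in groups.values() if len(g) > 1}

    edges = [("s1", "s2", 25), ("s2", "s3", 30), ("s3", "s4", 95), ("s4", "s5", 90)]
    regions = congested_components(edges, threshold=40)
    # one congested region: {"s1", "s2", "s3"}
    ```

    The streaming variants in the paper maintain such components incrementally as edge weights change, rather than recomputing them per batch.
    
    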

  • 25.
    Abbas, Zainab
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Sottovia, Paolo
    Huawei Munich Research Centre, Munich, Germany.
    Hassan, Mohamad Al Hajj
    Huawei Munich Research Centre, Munich, Germany.
    Foroni, Daniele
    Huawei Munich Research Centre, Munich, Germany.
    Bortoli, Stefano
    Huawei Munich Research Centre, Munich, Germany.
    Real-time Traffic Jam Detection and Congestion Reduction Using Streaming Graph Analytics2020In: 2020 IEEE International Conference on Big Data (Big Data), Institute of Electrical and Electronics Engineers (IEEE) , 2020, p. 3109-3118Conference paper (Refereed)
    Abstract [en]

    Traffic congestion is a problem of day-to-day life, especially in big cities. Various traffic control infrastructure systems have been deployed to monitor and improve the flow of traffic across cities. Real-time congestion detection can serve many useful purposes, including warning drivers approaching the congested area and daily route planning. Most existing congestion detection solutions combine historical data with continuous sensor readings and rely on data collected from multiple sensors deployed along the road, measuring the speed of vehicles. In our work, by contrast, we present a framework that operates in a pure streaming setting, where no historical data are available before processing: the traffic data streams, possibly unbounded, arrive in real time. Moreover, the data used in our case are collected only from sensors placed at road intersections. We therefore investigate a real-time congestion detection and reduction solution that works on traffic streams without any prior knowledge. The goals of our work are 1) to detect traffic jams in real time, and 2) to reduce congestion in the traffic jam areas. In this work, we present a real-time traffic jam detection and congestion reduction framework: 1) we propose a directed weighted graph representation of the traffic infrastructure network for capturing dependencies between sensor data to measure traffic congestion; 2) we present online traffic jam detection and congestion reduction techniques built on a modern stream processing system, Apache Flink; 3) we develop dynamic traffic light policies for controlling traffic in congested areas to reduce the travel time of vehicles. Our experimental results indicate that we are able to detect traffic jams in real time and deploy new traffic light policies that result in 27% less travel time at best, and 8% less on average, compared to the travel time under the default traffic light policies. Our scalability results show that our system is able to handle high-intensity streaming data with high throughput and low latency.
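    A dynamic traffic light policy of the kind mentioned in point 3 can be sketched as proportional green-time allocation: each cycle, an intersection splits its green time among approaches in proportion to their observed queue lengths. This is a generic illustration with invented numbers, not the paper's actual policy:

    ```python
    # Proportional green-time allocation: split a fixed cycle among approaches
    # according to their current queue lengths, with a minimum green per
    # approach. Cycle length, minimum green and queues are illustrative.

    def allocate_green(queues, cycle=60, min_green=5):
        """queues: approach -> queued vehicles. Returns approach -> green seconds."""
        total = sum(queues.values())
        if total == 0:  # no demand: share the cycle evenly
            return {a: cycle / len(queues) for a in queues}
        spare = cycle - min_green * len(queues)   # time left after minimum greens
        return {a: min_green + spare * q / total for a, q in queues.items()}

    plan = allocate_green({"north": 12, "south": 4, "east": 0, "west": 4})
    # the heaviest approach gets the longest green; greens sum to the cycle
    ```

    A streaming deployment would recompute the queue estimates from the sensor stream every cycle and push the new plan to the controller.
    
    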

  • 26. Abbasi, A. G.
    et al.
    Muftic, Sead
    KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.
    Schmölzer, Gernot
    KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.
    CryptoNET: Secure federation protocol and authorization policies for SMI2009In: Post-Proceedings of the 4th International Conference on Risks and Security of Internet and Systems, CRiSIS 2009, 2009, p. 19-25Conference paper (Refereed)
    Abstract [en]

    The paper describes a protocol for a Secure E-Mail Infrastructure that establishes trust between different domains in order to protect mail servers from spam messages. The protocol uses messages for trusted interactions between intra- and inter-domain E-mail components, Secure E-Mail (SEM) servers and Secure Mail Infrastructure (SMI) servers. In addition, the protocol validates E-mail addresses, thus guaranteeing to the recipient that the E-mail comes from a trusted domain. We also use XACML-based authorization policies at the sending and receiving servers, enforced by associated Policy Enforcement Point (PEP) servers at the SEM servers, in order to provide complete protection against spam.

  • 27.
    Abbasi, Abdul Ghafoor
    KTH, School of Information and Communication Technology (ICT), Communication Systems, CoS.
    CryptoNET: Generic Security Framework for Cloud Computing Environments2011Doctoral thesis, monograph (Other academic)
    Abstract [en]

    The area of this research is security in distributed environments such as cloud computing and network applications. The specific focus was the design and implementation of a high-assurance network environment, comprising various secure and security-enhanced applications. “High Assurance” means that

    -               our system is guaranteed to be secure,

    -               it is verifiable to provide the complete set of security services,

    -               we prove that it always functions correctly, and

    -               we justify our claim that it cannot be compromised without user neglect and/or consent.

     

    We do not know of any equivalent research results or even commercial security systems with such properties. Based on that, we claim several significant research and development contributions to the state of the art of computer network security.

    In the last two decades there have been many activities and contributions to protect data, messages and other resources in computer networks, to provide privacy for users, reliability, availability and integrity of resources, and to provide other security properties for network environments and applications. Governments, international organizations, private companies and individuals invest a great deal of time, effort and budget to install and use various security products and solutions. However, in spite of all these needs, activities and on-going efforts, and of all current solutions, it is the general belief that security in today's networks and applications is not adequate.

    At the moment there are two general approaches to network application security. One approach is to enforce isolation of users, network resources, and applications. In this category we have solutions like firewalls, intrusion-detection systems, port scanners, spam filters, virus detection and elimination tools, etc. The goal is to protect resources and applications by isolation after their installation in the operational environment. The second approach is to apply methodology, tools and security solutions already in the process of creating network applications. This approach includes methodologies for secure software design, ready-made security modules and libraries, rules for the software development process, and formal and strict testing procedures. The goal is to create secure applications even before their operational deployment. Current experience clearly shows that both approaches have failed to provide an adequate level of security, where users would be guaranteed to deploy and use secure, reliable and trusted network applications.

    In the current situation, it is therefore obvious that a new approach and new thinking towards creating strongly protected and guaranteed secure network environments and applications are needed. In our research we have taken an approach completely different from the two mentioned above. Our first principle is to use cryptographic protection for all application resources. Based on this principle, in our system data in local files and database tables are encrypted, messages and control parameters are encrypted, and even software modules are encrypted. The principle is that if all resources of an application are always encrypted, i.e. “enveloped in a cryptographic shield”, then

    -               its software modules are not vulnerable to malware and viruses,

    -               its data are not vulnerable to illegal reading and theft,

    -               all messages exchanged in a networking environment are strongly protected, and

    -               all other resources of an application are also strongly protected.

     

    Thus, we strongly protect applications and their resources before they are installed, after they are deployed, and also all the time during their use.

    Furthermore, our methodology to create such systems and to apply total cryptographic protection is based on the design of security components in the form of generic security objects. First, each of those objects, whether a data object or a functional object, is itself encrypted. If an object is a data object, representing a file, database table, communication message, etc., its encryption means that its data are protected all the time. If an object is a functional object, like cryptographic mechanisms or an encapsulation module, this principle means that its code cannot be damaged by malware. Protected functional objects are decrypted only on the fly, before being loaded into main memory for execution. Each of our objects is complete in terms of its content (data objects) and its functionality (functional objects), supports multiple functional alternatives, provides transparent handling of security credentials and management of security attributes, and is easy to integrate with individual applications. In addition, each object is designed and implemented using well-established security standards and technologies, so the complete system, created as a combination of those objects, is itself compliant with security standards and, therefore, interoperable with existing security systems.

    By applying our methodology, we first designed enabling components for our security system. They are collections of simple and composite objects that also mutually interact in order to provide various security services. The enabling components of our system are:  Security Provider, Security Protocols, Generic Security Server, Security SDKs, and Secure Execution Environment. They are all mainly engine components of our security system and they provide the same set of cryptographic and network security services to all other security–enhanced applications.

    Furthermore, for our individual security objects and also for larger security systems, in order to prove their structural and functional correctness, we applied a deductive scheme for the verification and validation of security systems. We used the following principle: “if individual objects are verified and proven to be secure, if their instantiation, combination and operations are secure, and if the protocols between them are secure, then the complete system, created from such objects, is also verifiably secure”. The data and attributes of each object are protected and secure, and they can only be accessed by authenticated and authorized users in a secure way. This means that the structural security properties of objects can be verified upon their installation. In addition, each object is maintained and manipulated within our secure environment, so each object is protected and secure in all its states, even after its closing state, because the original objects are encrypted and their data and states, stored in a database or in files, are also protected.

    Formal validation of our approach and our methodology was performed using a threat model. We analyzed our generic security objects individually and identified various potential threats to their data, attributes, actions, and various states. We also evaluated the behavior of each object against potential threats and established that our approach provides better protection than some alternative solutions against the various threats mentioned. In addition, we applied the threat model to our composite generic security objects and secure network applications, and we proved that the deductive approach provides a better methodology for designing and developing secure network applications. We also quantitatively evaluated the performance of our generic security objects and found that a system developed using our methodology performs cryptographic functions efficiently.

    We have also solved some additional important aspects required for the full scope of security services for network applications and cloud environments: manipulation and management of cryptographic keys, execution of encrypted software, and even secure and controlled collaboration of our encrypted applications in cloud computing environments. During our research we have created a set of development tools and a development methodology which can be used to create cryptographically protected applications. The same resources and tools are also used as a run-time supporting environment for the execution of our secure applications. We call such a total cryptographic protection system for the design, development and run-time of secure network applications the CryptoNET system. The CryptoNET security system is structured in the form of components categorized in three groups: Integrated Secure Workstation, Secure Application Servers, and Security Management Infrastructure Servers. Furthermore, our enabling components provide the same set of security services to all components of the CryptoNET system.

    Integrated Secure Workstation is designed and implemented in the form of a collaborative secure environment for users. It protects local IT resources, messages and operations for multiple applications. It comprises four most commonly used PC applications as client components: Secure Station Manager (equivalent to Windows Explorer), Secure E-Mail Client, Secure Web Browser, and Secure Documents Manager. These four client components for their security extensions use functions and credentials of the enabling components in order to provide standard security services (authentication, confidentiality, integrity and access control) and also additional, extended security services, such as transparent handling of certificates, use of smart cards, Strong Authentication protocol, Security Assertion Markup Language (SAML) based Single-Sign-On protocol, secure sessions, and other security functions.

    Secure Application Servers are the components of our secure network applications: Secure E-Mail Server, Secure Web Server, Secure Library Server, and Secure Software Distribution Server. These servers provide application-specific services to client components. Some of the common security services provided by Secure Application Servers to client components are the Single-Sign-On protocol, secure communication, and user authorization. In our system, application servers are installed in a domain, but they can also be installed as services in a cloud environment. Secure Application Servers are designed and implemented using the concept and implementation of the Generic Security Server, which provides extended security functions using our engine components. By adopting this approach, the same set of security services is available to each application server.

    Security Management Infrastructure Servers provide domain-level and infrastructure-level services to the components of the CryptoNET architecture. They are standard security servers, known as cloud security infrastructure, deployed as services in our domain-level cloud environment.

    The CryptoNET system is complete in terms of the functions and security services it provides. It is internally integrated, so that the same cryptographic engines are used by all applications. And finally, it is completely transparent to users: it applies its security services without expecting any special interventions from users. In this thesis, we developed and evaluated the secure network applications of our CryptoNET system and applied the threat model to their validation and analysis. We found that the deductive scheme of using our generic security objects is effective for the verification and testing of secure, protected and verifiably secure network applications.

    Based on all these theoretical research and practical development results, we believe that our CryptoNET system is completely and verifiably secure and therefore represents a significant contribution to the current state of the art of computer network security.
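    The thesis' first principle, data objects that are stored only in encrypted form and decrypted on the fly, can be illustrated with a toy wrapper. This is our own sketch, not the thesis' implementation: the XOR keystream below is a dependency-free placeholder for a real cipher such as AES, and the class and method names are invented:

    ```python
    # Toy illustration of a "generic security object": the payload is held only
    # as ciphertext plus an integrity tag, and is decrypted on the fly on access.
    # The XOR keystream cipher is a PLACEHOLDER for a real cipher (e.g. AES);
    # it only keeps the example dependency-free. Names are ours, not the thesis'.

    import hashlib
    import hmac

    def _keystream_xor(key: bytes, data: bytes) -> bytes:
        # derive a keystream by hashing key + block counter, XOR with the data
        out = bytearray()
        for block in range(0, len(data), 32):
            pad = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
            chunk = data[block:block + 32]
            out.extend(b ^ p for b, p in zip(chunk, pad))
        return bytes(out)

    class SecureDataObject:
        """Holds only ciphertext plus an HMAC tag; plaintext never rests."""
        def __init__(self, key: bytes, plaintext: bytes):
            self._key = key
            self._ciphertext = _keystream_xor(key, plaintext)
            self._tag = hmac.new(key, self._ciphertext, hashlib.sha256).digest()

        def read(self) -> bytes:
            # verify integrity first, then decrypt on the fly
            tag = hmac.new(self._key, self._ciphertext, hashlib.sha256).digest()
            if not hmac.compare_digest(tag, self._tag):
                raise ValueError("object tampered with")
            return _keystream_xor(self._key, self._ciphertext)

    obj = SecureDataObject(b"demo-key", b"confidential record")
    recovered = obj.read()
    ```

    The stored form is opaque and any modification of the ciphertext is rejected on access, which mirrors (in miniature) the thesis' claim that protected objects are safe in all their states.
    
    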

  • 28.
    Abbasi, Abdul Ghafoor
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Machine Elements.
    Muftic, Sead
    KTH, School of Information and Communication Technology (ICT), Communication Systems, CoS.
    CryptoNET: Security Management Protocols2010In: ADVANCES IN DATA NETWORKS, COMMUNICATIONS, COMPUTERS / [ed] Mastorakis, NE; Mladenov, V, ATHENS: WORLD SCIENTIFIC AND ENGINEERING ACAD AND SOC , 2010, p. 15-20Conference paper (Refereed)
    Abstract [en]

    In this paper we describe several network security protocols used by various components of the CryptoNET architecture. The protocols are based on the concept of generic security objects and on well-established security standards and technologies. The distinctive features of our security protocols are: (1) they are complete in terms of their functionality, (2) they are easy to integrate with applications, (3) they transparently handle security credentials and protocol-specific attributes using FIPS 201 (PIV) smart cards, and (4) they are based on generic security objects. These protocols are: a remote user authentication protocol, a single-sign-on protocol, a SAML authorization protocol, and a secure sessions protocol. The security protocols use our Security Provider as a collection of cryptographic engines implemented either in software or using FIPS 201 (PIV) smart cards. It also manages the protocols' attributes using security applets stored in the PIV smart card.

  • 29.
    Abbasi, Abdul Ghafoor
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Communication Systems, CoS.
    Muftic, Sead
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Communication Systems, CoS.
    Hotamov, I.
    Web contents protection, secure execution and authorized distribution2010In: Proceedings - 5th International Multi-Conference on Computing in the Global Information Technology, ICCGI 2010, 2010, p. 157-162Conference paper (Refereed)
    Abstract [en]

    This paper describes the design and implementation of a comprehensive system for the protection of Web contents. In this design, new security components and extended security features are introduced in order to protect Web contents against various Web attacks. The components and extended security features are: protection of Web pages using strong encryption techniques, encapsulation of Web contents and resources in PKCS#7, an extended secure execution environment for the Java Web Server, eXtensible Access Control Markup Language (XACML) based authorization policies, and a secure Web proxy. The design and implementation of our system are based on the concepts of generic security objects and component-based architecture, which makes it compatible with existing Web infrastructures without any modification.
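    XACML-style authorization, as used above, boils down to matching a request's attributes against rule targets and combining rule effects. The following minimal sketch shows a deny-overrides evaluation; the attribute names and rules are invented for illustration and are far simpler than real XACML:

    ```python
    # Minimal XACML-flavored policy evaluation with a deny-overrides combining
    # algorithm. Attribute names and rules are illustrative, not from the paper.

    def evaluate(policy, request):
        """Return 'Permit', 'Deny', or 'NotApplicable' for a request
        (a dict of attributes) under deny-overrides semantics."""
        decision = "NotApplicable"
        for rule in policy:
            # a rule applies when all its target attributes match the request
            if all(request.get(k) == v for k, v in rule["target"].items()):
                if rule["effect"] == "Deny":
                    return "Deny"            # a matching Deny overrides everything
                decision = "Permit"
        return decision

    policy = [
        {"target": {"resource": "page.html", "role": "guest"}, "effect": "Deny"},
        {"target": {"resource": "page.html"}, "effect": "Permit"},
    ]

    d1 = evaluate(policy, {"resource": "page.html", "role": "guest"})
    d2 = evaluate(policy, {"resource": "page.html", "role": "editor"})
    d3 = evaluate(policy, {"resource": "other.html", "role": "editor"})
    ```

    In the system described above, a check like this would run in the Policy Enforcement Point guarding the Web server, with the policy expressed in actual XACML.
    
    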

  • 30.
    Abbasi, Azad Ismail
    KTH, School of Computer Science and Communication (CSC).
    Coffeepot for Masochists: A Study in User-Centered System Design2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This master thesis was carried out in the field of human-computer interaction, more specifically the area of user-centered system design. The focus has been on usability and useful graphical user interfaces, and current theories and definitions in the field have been considered. The literature study covers well-known authors and organisations in the domains mentioned above, Jakob Nielsen, Donald A. Norman and the International Organization for Standardization (ISO), to mention some.

    Another source for this work, whose theories and way of working have been applied, is the book "User-Centered System Design" by Jan Gulliksen and Bengt Göransson.

    The work started with a literature study, followed by a review of methods to use. The next step was task and user analysis, followed by the development phase. The user has been given a central role in this project and, just as recommended, been involved throughout the whole cycle. A useful method for getting feedback from users, in addition to interviews and workshops, has been heuristic evaluation.

    The final result and conclusion show that user-centered system design is a powerful approach to adopt when designing and developing interactive user interfaces.

  • 31. Abbaszadeh Shahri, A.
    et al.
    Larsson, Stefan
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Soil and Rock Mechanics.
    Renkel, C.
    Artificial intelligence models to generate visualized bedrock level: a case study in Sweden2020In: Modeling Earth Systems and Environment, ISSN 2363-6203, E-ISSN 2363-6211, Vol. 6, no 3, p. 1509-1528Article in journal (Refereed)
    Abstract [en]

    Assessment of the spatial distribution of the bedrock level (BL), the lower boundary of soil layers, is associated with many uncertainties. Increasing our knowledge about the spatial variability of the BL through high-resolution and more accurate predictive models is an important challenge for the design of safe and economical geostructures. In this paper, the efficiency and predictive ability of different artificial intelligence (AI)-based models in generating improved 3D spatial distributions of the BL for an area in Stockholm, Sweden, were explored. Multilayer perceptrons, a generalized feed-forward neural network (GFFN), radial basis functions, and support vector regression (SVR) were developed and compared to the ordinary kriging geostatistical technique. Analysis of the improvement in progress using confusion matrices showed that the GFFN and SVR provided results closer to reality. The ranking of performance accuracy using different statistical errors and precision/recall curves also demonstrated the superiority and robustness of the GFFN and SVR compared to the other models. The results indicated that in the absence of measured data the AI models are flexible and efficient tools for creating more accurate spatial 3D models. Analyses of confidence intervals and prediction intervals confirmed that the developed AI models can overcome the associated uncertainties and provide appropriate predictions at any point in the subsurface of the study area.

  • 32. Abbaszadeh Shahri, A.
    et al.
    Shan, Chunling
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Soil and Rock Mechanics.
    Larsson, Stefan
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Soil and Rock Mechanics.
    A Novel Approach to Uncertainty Quantification in Groundwater Table Modeling by Automated Predictive Deep Learning2022In: Natural Resources Research, ISSN 1520-7439, E-ISSN 1573-8981, Vol. 31, no 3, p. 1351-1373Article in journal (Refereed)
    Abstract [en]

    Uncertainty quantification (UQ) is an important benchmark to assess the performance of artificial intelligence (AI) models, and particularly of deep learning ensemble-based models. However, UQ with current AI-based methods is not only limited in terms of computational resources; it also requires changes to topology and optimization processes, as well as multiple runs to monitor model instabilities. From both geo-engineering and societal perspectives, a predictive groundwater table (GWT) model presents an important challenge, where a lack of UQ limits the validity of findings and may undermine science-based decisions. To overcome and address these limitations, a novel ensemble, the automated random deactivating connective weights approach (ARDCW), is presented and applied to geographically located GWT data retrieved from a geo-engineering project in Stockholm, Sweden. In this approach, UQ is achieved via a combination of several ensembles derived from a fixed optimum topology subjected to randomly switched-off weights, which allows prediction with a single forward pass. The process was developed and programmed to provide trackable performance on a specific task and access to a wide variety of internal characteristics and libraries. A comparison of performance with Monte Carlo dropout and quantile regression using computer vision and control task metrics showed significant improvement for the ARDCW. This approach does not require changes in the optimization process and can be applied to already trained topologies in a way that outperforms other models.
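    The ARDCW mechanics are only outlined in the abstract; a minimal sketch of the underlying idea (a fixed trained topology whose weights are randomly switched off per ensemble member, yielding a predictive spread from plain forward passes) might look as follows. The tiny network, its weights, and the deactivation probability are illustrative assumptions, not the authors' model:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # A tiny "trained" MLP; in practice these weights come from a fitted model.
    W1 = rng.normal(size=(3, 8))
    W2 = rng.normal(size=(8, 1))

    def forward(x, w1, w2):
        h = np.tanh(x @ w1)
        return h @ w2

    def deactivating_weights_ensemble(x, n_members=50, p_off=0.1):
        """Each ensemble member zeroes a random subset of the fixed weights
        and runs one forward pass; the spread over members is the uncertainty."""
        preds = []
        for _ in range(n_members):
            m1 = rng.random(W1.shape) > p_off   # keep ~90% of weights
            m2 = rng.random(W2.shape) > p_off
            preds.append(forward(x, W1 * m1, W2 * m2))
        preds = np.stack(preds)                  # (n_members, n_points, 1)
        return preds.mean(axis=0), preds.std(axis=0)

    x = rng.normal(size=(5, 3))                  # e.g. (easting, northing, depth)
    mean, std = deactivating_weights_ensemble(x)
    ```

    The member mean and standard deviation then give a point prediction and an uncertainty band, analogous to Monte Carlo dropout but applied post hoc to fixed, already-trained weights.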

  • 33.
    Abbaszadeh Shahri, Abbas
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering. Islamic Azad University.
    An Optimized Artificial Neural Network Structure to Predict Clay Sensitivity in a High Landslide Prone Area Using Piezocone Penetration Test (CPTu) Data: A Case Study in Southwest of Sweden2016In: Geotechnical and Geological Engineering, ISSN 0960-3182, E-ISSN 1573-1529, p. 1-14Article in journal (Refereed)
    Abstract [en]

    The application of artificial neural networks (ANN) to various geotechnical engineering problems, such as site characterization problems that are difficult to solve through conventional approaches, has demonstrated some degree of success. In the current paper a developed and optimized five-layer feed-forward back-propagation neural network with a 4-4-4-3-1 topology, a network error of 0.00201 and R2 = 0.941, trained with the conjugate gradient descent algorithm, was introduced to predict the clay sensitivity parameter in a specified area in southwest Sweden. The close relation of this parameter to landslides that have occurred in Sweden was the main reason this study focused on it. For this purpose, data from 70 piezocone penetration test (CPTu) points was used to model the variations of clay sensitivity, and the influence of parameters directly or indirectly related to the CPTu has been taken into account and discussed in detail. The operation process applied to find the optimized ANN model, using various training algorithms as well as different activation functions, is the main advantage of this paper. The performance and feasibility of the proposed optimized model have been examined and evaluated using various statistical and analytical criteria as well as regression analyses, and then compared to in situ field tests and laboratory investigation results. The sensitivity analysis of this study showed that depth and pore pressure are the two most effective factors, and cone tip resistance the least effective factor, in predicting clay sensitivity.

  • 34.
    Abbaszadeh Shahri, Abbas
    et al.
    Johan Lundberg AB, S-75450 Uppsala, Sweden.;Tyrens, Div Rock Engn, S-11886 Stockholm, Sweden..
    Shan, Chunling
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Soil and Rock Mechanics. Tyrens, Div Rock Engn, S-11886 Stockholm, Sweden..
    Larsson, Stefan
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Soil and Rock Mechanics.
    Johansson, Fredrik
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Soil and Rock Mechanics.
    Normalizing Large Scale Sensor-Based MWD Data: An Automated Method toward A Unified Database2024In: Sensors, E-ISSN 1424-8220, Vol. 24, no 4, article id 1209Article in journal (Refereed)
    Abstract [en]

    In the context of geo-infrastructures and specifically tunneling projects, analyzing large-scale sensor-based measurement-while-drilling (MWD) data plays a pivotal role in assessing rock engineering conditions. However, handling big MWD data is a time-consuming and challenging task due to multiform stacking. Extracting valuable insights and improving the accuracy of geoengineering interpretations from MWD data necessitates a combination of domain expertise and data science skills in an iterative process. To address these challenges and efficiently normalize and filter out noisy data, an automated processing approach integrating the stepwise technique, mode, and percentile gate bands for both single and peer-group-based holes was developed. Subsequently, the mathematical concept of a novel normalizing index for classifying such big datasets was also presented. The visualized results from different geo-infrastructure datasets in Sweden indicated that outliers and noisy data can be eliminated more efficiently using single-hole-based normalizing. Additionally, a relational unified PostgreSQL database was created to store and automatically transfer the processed and raw MWD data as well as real-time grouting data, offering a cost-effective and efficient data extraction tool. The generated database is expected to facilitate in-depth investigations and enable application of artificial intelligence (AI) techniques to predict rock quality conditions and design appropriate support systems based on MWD data.
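    The percentile gate-band idea can be illustrated with a minimal single-hole sketch; the cut-off percentiles and the drilling-log values below are invented for illustration, and the paper's automated pipeline additionally handles mode-based gates and peer-group-based holes:

    ```python
    import numpy as np

    def percentile_gate_filter(values, low=5, high=95):
        """Keep only samples inside the [low, high] percentile band --
        a simplified stand-in for gate-band outlier removal on one hole."""
        lo, hi = np.percentile(values, [low, high])
        mask = (values >= lo) & (values <= hi)
        return values[mask], mask

    # One drill hole's penetration-rate log with two obvious spikes (made-up data).
    rate = np.array([1.1, 1.2, 1.0, 9.9, 1.3, 1.1, 0.01, 1.2, 1.15, 1.25])
    clean, kept = percentile_gate_filter(rate)
    ```

    Here the two spikes (9.9 and 0.01) fall outside the 5th-95th percentile band and are dropped, while the plausible readings survive untouched.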

  • 35. Abbeloos, W.
    et al.
    Caccamo, Sergio
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Ataer-Cansizoglu, E.
    Taguchi, Y.
    Feng, C.
    Lee, T. -Y
    Detecting and Grouping Identical Objects for Region Proposal and Classification2017In: 2017 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, IEEE Computer Society, 2017, Vol. 2017, p. 501-502, article id 8014810Conference paper (Refereed)
    Abstract [en]

    Often multiple instances of an object occur in the same scene, for example in a warehouse. Unsupervised multi-instance object discovery algorithms are able to detect and identify such objects. We use such an algorithm to provide object proposals to a convolutional neural network (CNN) based classifier. This results in fewer regions to evaluate, compared to traditional region proposal algorithms. Additionally, it enables using the joint probability of multiple instances of an object, resulting in improved classification accuracy. The proposed technique can also split a single class into multiple sub-classes corresponding to the different object types, enabling hierarchical classification.

  • 36.
    Abd Alwaheb, Sofia
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Implementering DevSecOps metodik vid systemutveckling för hälso och sjukvård2023Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    In healthcare, IT security is crucial for protecting both personal information and patient safety. Currently, the implementation of security measures and testing is done after software development, which can reduce efficiency, and pose a potential risk to patient privacy. This study examined the implementation of the DevSecOps methodology in healthcare, focusing on the development phase. By interviewing employees and using security tools such as SAST, code review, penetration testing, and DAST, benefits and challenges were identified. The challenges included a lack of security knowledge and difficulty integrating tools for free. Despite this, the results demonstrated the potential to enhance security, streamline operations, and save money by utilizing free tools and implementing security during the development phase. Training and hiring security-competent personnel were also emphasized as important for maintaining high security standards.

    Download full text (pdf)
    fulltext
  • 37. Abd El Ghany, M. A.
    et al.
    El-Moursy, M. A.
    Ismail, Mohammed
    KTH, School of Information and Communication Technology (ICT), Electronic Systems. Ohio State University, Columbus, United States .
    High throughput architecture for high performance NoC2009In: ISCAS: 2009 IEEE International Symposium on Circuits and Systems, IEEE , 2009, p. 2241-2244Conference paper (Refereed)
    Abstract [en]

    High Throughput Butterfly Fat Tree (HTBFT) architecture to achieve high performance Networks on Chip (NoC) is proposed. The architecture increases the throughput of the network by 38% while preserving the average latency. The area of HTBFT switch is decreased by 18% as compared to Butterfly Fat Tree switch. The total metal resources required to implement HTBFT design is increased by 5% as compared to the total metal resources required to implement BFT design. The extra power consumption required to achieve the proposed architecture is 3% of the total power consumption of the BFT architecture.

  • 38.
    Abdallah Hussein Mohammed, Ahmed
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Analyzing common structures in Enterprise Architecture modeling notations2022Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Over the past few decades, the field of Enterprise Architecture has attracted researchers, and many Enterprise Architecture modeling frameworks have been proposed. However, in order to support different needs, the different frameworks offer many different element types that can be used to create an Enterprise Architecture. This abundance of elements can make it difficult for the end-user to differentiate between the usages of the various elements in order to identify which elements they actually need. Therefore, this research analyzes existing Enterprise Architecture modeling frameworks and extracts the common properties that exist in the different Enterprise Architecture modeling notations. In this study, we performed a Systematic Literature Review aimed at finding the most commonly used Enterprise Architecture modeling frameworks in the Enterprise Architecture literature. Additionally, the elements defined in these frameworks are used to create a taxonomy based on the similarities between the different Enterprise Architecture frameworks. Our results showed that TOGAF, ArchiMate, DoDAF, and IAF are the most used modeling frameworks. Also, we managed to identify the common elements that are available in the different Enterprise Architecture frameworks mentioned above and represent them in a multilevel model. The findings of this study can make it easier for the end-user to pick the appropriate elements for their use cases, as they highlight the core elements of Enterprise Architecture modeling. Additionally, we showed how our model can be extended to support the needs of different domains. This thesis also forms the foundation for the development of an Enterprise Architecture modeling framework that can be customized and extended so that only the relevant elements are presented to the end-user.

    Download full text (pdf)
    fulltext
  • 39.
    ABDALMAHMOODABADI, MAHBOOBEH
    KTH, School of Information and Communication Technology (ICT).
    The value of downstream information sharing in two-level supply chain: AN APPROACH TO AGENT-BASED MODELING2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Many supply chain firms have taken initiatives to facilitate demand information sharing between downstream and upstream entities. Information sharing is a key factor of collaboration in supply chain management (SCM). In recent decades, many efforts have been made to model supply chains mathematically, but mathematical models are incapable of capturing the dynamic nature of the system. It is necessary to study multidimensional supply chain models in which there is not only communication between supplier and retailer but also communication among retailers. Mathematical models can be seen as simple decision-making optimization between two entities, in which the effect of cooperation with other entities is completely ignored; such models are far from real-world systems. The purpose of this thesis is to create an agent-based model, as a substitute for mathematical modeling, to appraise the importance of sharing information on the supplier side when there is a relation among retailers by means of stock sharing. The conceptual model of a two-echelon supply chain is designed and implemented in Java using the Repast suite. The model includes four types of agents, namely supply chain, supplier, retailer and mediator agents, that interact with each other in a discrete event-based simulation. A multi-level factorial design is used to evaluate the performance of the supply chain, in terms of total cost saving, under different demand patterns. The significance of differences between experimental settings is tested statistically using ANOVA, pairwise, and univariate tests. Data analysis indicates that the significance of information sharing can be rather high, in particular when end customers' demands are considerably correlated. The cost saving achieved by sharing information is due to a reduced stock level, at the expense of an increased amount of backorders.

  • 40.
    Abdelgalil, Mohammed Saqr
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Lopez-Pernas, Sonsoles
    Idiographic Learning Analytics:A single student (N=1) approach using psychological networks2021Conference paper (Refereed)
    Abstract [en]

    Recent findings in the field of learning analytics have brought to our attention that conclusions drawn from cross-sectional group-level data may not capture the dynamic processes that unfold within each individual learner. In this light, idiographic methods have started to gain ground in many fields as a possible solution to examine students’ behavior at the individual level by using several data points from each learner to create person-specific insights. In this study, we introduce such novel methods to the learning analytics field by exploring the possible potentials that one can gain from zooming in on the fine-grained dynamics of a single student. Specifically, we make use of Gaussian Graphical Models —an emerging trend in network science— to analyze a single student's dispositions and devise insights specific to him/her. The results of our study revealed that the student under examination may need to learn better self-regulation techniques regarding reflection and planning.

  • 41.
    Abdelgalil, Mohammed Saqr
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. University of Eastern Finland, Joensuu, Finland.
    López-Pernas, S.
    Idiographic learning analytics: A single student (N=1) approach using psychological networks2021In: CEUR Workshop Proceedings, CEUR-WS , 2021, p. 16-22Conference paper (Refereed)
    Abstract [en]

    Recent findings in the field of learning analytics have brought to our attention that conclusions drawn from cross-sectional group-level data may not capture the dynamic processes that unfold within each individual learner. In this light, idiographic methods have started to gain ground in many fields as a possible solution to examine students' behavior at the individual level by using several data points from each learner to create person-specific insights. In this study, we introduce such novel methods to the learning analytics field by exploring the possible potentials that one can gain from zooming in on the fine-grained dynamics of a single student. Specifically, we make use of Gaussian Graphical Models (an emerging trend in network science) to analyze a single student's dispositions and devise insights specific to him/her. The results of our study revealed that the student under examination may need to learn better self-regulation techniques regarding reflection and planning.
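    As a rough illustration of the psychological-network idea, edge weights in a Gaussian Graphical Model are partial correlations derived from the precision (inverse covariance) matrix. The sketch below uses plain inverse covariance on invented daily disposition measurements for one student; published idiographic studies typically use regularized estimators such as the graphical lasso:

    ```python
    import numpy as np

    def partial_correlations(samples):
        """Edge weights of a Gaussian Graphical Model: partial correlations
        computed from the precision (inverse covariance) matrix."""
        prec = np.linalg.inv(np.cov(samples, rowvar=False))
        d = np.sqrt(np.diag(prec))
        pcor = -prec / np.outer(d, d)
        np.fill_diagonal(pcor, 1.0)
        return pcor

    # Invented daily dispositions for a single student over 60 days:
    # motivation, planning (driven by motivation), reflection (independent).
    rng = np.random.default_rng(1)
    motivation = rng.normal(size=60)
    planning = 0.7 * motivation + 0.3 * rng.normal(size=60)
    reflection = rng.normal(size=60)
    X = np.column_stack([motivation, planning, reflection])
    P = partial_correlations(X)
    ```

    A strong motivation-planning edge and weak reflection edges then reproduce, in miniature, the kind of person-specific network the study draws insights from.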

  • 42.
    Abdelmassih, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Container Orchestration in Security Demanding Environments at the Swedish Police Authority2018Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The adoption of containers and container orchestration in cloud computing is motivated by many aspects, from technical and organizational to economic gains. In this climate, even security demanding organizations are interested in such technologies but need reassurance that their requirements can be satisfied. The purpose of this thesis was to investigate how separation of applications could be achieved with Docker and Kubernetes such that it may satisfy the demands of the Swedish Police Authority.

    The investigation consisted of a literature study of research papers and official documentation as well as a technical study of iterative creation of Kubernetes clusters with various changes. A model was defined to represent the requirements for the ideal separation. In addition, a system was introduced to classify the separation requirements of the applications.

    The result of this thesis consists of three architectural proposals for achieving segmentation of Kubernetes cluster networking, two proposed systems to realize the segmentation, and one strategy for providing host-based separation between containers. Each proposal was evaluated and discussed with regard to suitability and risks for the Authority and parties with similar demands. The thesis concludes that a versatile application isolation can be achieved in Docker and Kubernetes. Therefore, the technologies can provide a sufficient degree of separation to be used in security demanding environments.

    Download full text (pdf)
    fulltext
  • 43.
    Abdelmassih, Christian
    et al.
    KTH, School of Computer Science and Communication (CSC).
    Hultman, Axel
    KTH, School of Computer Science and Communication (CSC).
    Förutspå golfresultat med hjälp av sentimentanalys på Twitter2016Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    In this study we examine the relationship between the sentiment value of golf players’ tweets and their sports results to evaluate the predictive power of their Twitter accounts. Findings on this topic may be of value to bookmakers, gamblers, coaches and fans of the sport. Our study is based on two datasets: PGA Tour golf statistics and 112 101 tweets made by 155 professional golfers over the course of two seasons. The golf players’ sentiment was quantified using the lexical sentiment analysis method AFINN.

     In contrast to other research with similar methods, our findings suggest that there is low correlation between the datasets and that the methods used in our study have low predictive power. Our recommendation is that future studies use additional prediction variables besides sentiment score to better evaluate the predictive power of golf players’ tweets.
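    AFINN-style lexical sentiment analysis assigns each known word an integer valence and sums over a text. A toy sketch with an invented mini-lexicon (the real AFINN list contains a few thousand English words scored from -5 to +5):

    ```python
    # Invented mini-lexicon in the AFINN style; not the real word list.
    AFINN_MINI = {"great": 3, "win": 4, "bad": -3, "missed": -2,
                  "happy": 3, "terrible": -3}

    def sentiment_score(tweet: str) -> int:
        """Lexical sentiment: sum the valence of every known word in the tweet."""
        return sum(AFINN_MINI.get(w.strip(".,!?").lower(), 0)
                   for w in tweet.split())

    print(sentiment_score("Great round today, happy with the win!"))  # 3 + 3 + 4 = 10
    ```

    Each tweet thus collapses to a single integer, which is what gets correlated against the golf statistics.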

    Download full text (pdf)
    fulltext
  • 44.
    Abdelnour, Jerome
    et al.
    NECOTIS Dept. of Electrical and Computer Engineering, Sherbrooke University, Canada.
    Rouat, Jean
    NECOTIS Dept. of Electrical and Computer Engineering, Sherbrooke University, Canada.
    Salvi, Giampiero
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. Department of Electronic Systems, Norwegian University of Science and Technology, Norway.
    NAAQA: A Neural Architecture for Acoustic Question Answering2022In: IEEE Transactions on Pattern Analysis and Machine Intelligence, ISSN 0162-8828, E-ISSN 1939-3539, p. 1-12Article in journal (Refereed)
    Download full text (pdf)
    fulltext
  • 45.
    Abdi Dahir, Najiib
    et al.
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Dahir Ali, Ikran
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Privacy preserving data access mechanism for health data2023Independent thesis Basic level (university diploma), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Due to the rise of digitalization and the growing amount of data, ensuring the integrity and security of patient data has become increasingly vital within the healthcare industry, which has traditionally managed substantial quantities of sensitive patient and personal information. This bachelor's thesis focused on designing and implementing a secure data sharing infrastructure to protect the integrity and confidentiality of patient data. Synthetic data was used to enable access for researchers and students in regulated environments without compromising patient privacy. The project successfully achieved its goals by evaluating different privacy-preserving mechanisms and developing a machine learning-based application to demonstrate the functionality of the secure data sharing infrastructure. Despite some challenges, the chosen algorithms showed promising results in terms of privacy preservation and statistical similarity. Ultimately, the use of synthetic data can promote fair decision-making processes and contribute to secure data sharing practices in the healthcare industry.

    Download full text (pdf)
    Examensarbete
  • 46.
    Abdihakim, Ali
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Characterizing Feature Influence and Predicting Video Popularity on YouTube2021Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    YouTube is an online video sharing platform where users can distribute and consume video and other types of content. Rapid technological advancement, along with the proliferation of technological gadgets, has led to the phenomenon of viral videos, where videos and content garner hundreds of thousands if not millions of views in a short span of time. This thesis looked at the reasons for this virality, more specifically as it pertains to videos on YouTube. This was done by building a predictor model using two different approaches and extracting the important features that cause video popularity. The thesis further observed how these features impact video popularity via partial dependence plots. The kNN model outperformed the logistic regression model. The thesis showed, among other things, that YouTube channel and title were the most important features, followed by comment count, age and video category. Much research has been done on popularity prediction, but less on deriving important features and evaluating their impact on popularity. Further research has to be conducted on feature influence, which is paramount to understanding why content goes viral.
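    Partial dependence plots like those mentioned above average the model's prediction while sweeping one feature over a grid. A self-contained sketch with an invented linear popularity model (the thesis's actual features and fitted models differ):

    ```python
    import numpy as np

    def partial_dependence(model, X, feature, grid):
        """Partial dependence of `model` on one feature: average prediction
        over the dataset with that feature forced to each grid value."""
        pd_values = []
        for v in grid:
            Xv = X.copy()
            Xv[:, feature] = v
            pd_values.append(model(Xv).mean())
        return np.array(pd_values)

    # Hypothetical popularity model: views grow with comment count (feature 0)
    # and decay with video age (feature 1). Purely illustrative.
    model = lambda X: 2.0 * X[:, 0] - 0.5 * X[:, 1]
    rng = np.random.default_rng(2)
    X = rng.uniform(0, 10, size=(100, 2))
    pd_comments = partial_dependence(model, X, feature=0, grid=np.linspace(0, 10, 5))
    ```

    Plotting `pd_comments` against the grid would show the marginal effect of comment count on predicted popularity, holding the rest of the data at its observed values.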

    Download full text (pdf)
    fulltext
  • 47.
    Abdinur Iusuf, Joakim
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Nordling, Edvin
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Easing the transition from block-based programming in education: Comparing two ways of transitioning from block-based to text-based programming and an alternative way to solve the transition problem2023Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Many learners find the transition from block-based programming to text-based programming difficult. Consequently, research has investigated how block-based languages support learners when making the transition to text-based programming. It categorized the way in which block-based languages support the transition into one-way transition, dual-modality and hybrid environments. This research investigates how one-way transition environments compare to dual-modality environments with regards to learning a text-based language, and how the two modalities differ with regards to the motivational factors satisfaction, enjoyment and easiness. The results show that dual-modality environments could be a better alternative than one-way transition environment when learners make the transition from block-based to text-based programming. The results also show that solving a problem in dual-modality environments could be easier than solving them in one-way transition environments, which could potentially mean that learners experience more motivation when making the transition in a dual-modality environment. This study also investigated if there is an alternative to one-way transition, dual-modality and hybrid environments when helping learners transition from block-based to text-based programming, and what a learning activity in this alternative solution could look like. It found that Blockly Games is an alternative, and describes a learning activity built in Blockly Games. Future research should aim at gaining a deeper understanding of the differences between one-way transition, dual-modality and hybrid environments, and investigate if the approach taken by Blockly Games is a better alternative.

    Download full text (pdf)
    fulltext
  • 48.
    Abdlwafa, Alan
    et al.
    KTH, School of Computer Science and Communication (CSC).
    Edman, Henrik
    KTH, School of Computer Science and Communication (CSC).
    Distributed Graph Mining: A study of performance advantages in distributed data mining paradigms when processing graphs using PageRank on a single node cluster2015Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Distributed data mining is a relatively new and steadily growing area within computer science, emerging from the demand to gather and process various distributed data by utilising clusters. This report presents the properties of graph-structured data and which paradigms to use for efficiently processing the data type, based on comprehensive theoretical studies applied to practical tests performed on a single-node cluster. The results of the study showcase various performance aspects of processing graph data, using different open source paradigm frameworks and numbers of input shards. A conclusion to be drawn from this study is that there are no real performance advantages to using distributed data mining paradigms specifically developed for graph data on single machines.
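    PageRank, the workload used in these benchmarks, is compact enough to sketch in full. This is the standard single-machine power-iteration formulation on a toy three-page graph, not the distributed implementations the thesis compares:

    ```python
    def pagerank(links, damping=0.85, iters=50):
        """Plain power-iteration PageRank on an adjacency dict {node: [outlinks]}."""
        nodes = list(links)
        rank = {n: 1.0 / len(nodes) for n in nodes}
        for _ in range(iters):
            new = {n: (1.0 - damping) / len(nodes) for n in nodes}
            for n, outs in links.items():
                share = damping * rank[n] / len(outs)
                for m in outs:
                    new[m] += share
            rank = new
        return rank

    # Tiny example graph; C is linked to by both other pages, so it ranks highest.
    graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
    ranks = pagerank(graph)
    ```

    Distributed frameworks split exactly this computation across shards and workers, which is where the report's sharding and paradigm comparisons come in.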

    Download full text (pdf)
    fulltext
  • 49.
    Abdollahi, Meisam
    et al.
    Iran Univ Sci & Technol, Tehran, Iran..
    Baharloo, Mohammad
    Inst Res Fundamental Sci IPM, Tehran, Iran..
    Shokouhinia, Fateme
    Amirkabir Univ Technol, Tehran, Iran..
    Ebrahimi, Masoumeh
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems, Electronic and embedded systems.
    RAP-NoC: Reliability Assessment of Photonic Network-on-Chips, A simulator2021In: Proceedings of the 8th ACM international conference on nanoscale computing and communication (ACM NANOCOM 2021), Association for Computing Machinery (ACM) , 2021Conference paper (Refereed)
    Abstract [en]

    Nowadays, optical network-on-chip is accepted as a promising alternative to traditional electrical interconnects due to lower transmission delay and power consumption as well as considerably higher data bandwidth. However, silicon photonics struggles with some particular challenges that threaten the reliability of the data transmission process. The most important challenges are temperature fluctuation, process variation, aging, crosstalk noise, and insertion loss. Although several attempts have been made to investigate the effect of these issues on the reliability of optical network-on-chip, none of them modeled the reliability of photonic network-on-chip with a system-level approach based on basic element failure rates. In this paper, an analytical model-based simulator, called Reliability Assessment of Photonic Network-on-Chips (RAP-NoC), is proposed to evaluate the reliability of different 2D optical network-on-chip architectures and data traffic. The experimental results show that, in general, Mesh topology is more reliable than Torus of the same size. Increasing the reliability of the Microring Resonator (MR) has a more significant impact on the reliability of an optical router than on that of a network.
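    The system-level idea of composing a router's reliability from basic element failure rates can be sketched with a series model under an exponential-lifetime assumption; the element counts and failure rates below are invented, not taken from RAP-NoC:

    ```python
    import math

    def element_reliability(failure_rate, hours):
        """Exponential lifetime model: R(t) = exp(-lambda * t)."""
        return math.exp(-failure_rate * hours)

    def series_reliability(rels):
        """A series system fails if any element fails, so reliabilities multiply."""
        out = 1.0
        for r in rels:
            out *= r
        return out

    # Hypothetical router built from 4 microring resonators and 2 waveguide
    # crossings, evaluated at 10,000 hours -- all numbers are made up.
    mrs = [element_reliability(1e-6, 10_000)] * 4
    crossings = [element_reliability(2e-7, 10_000)] * 2
    router = series_reliability(mrs + crossings)
    ```

    Because the four MRs dominate the exponent in this toy setup, improving MR reliability moves the router figure far more than improving a single crossing, in line with the abstract's observation about the relative impact of the MR.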

  • 50.
    Abdul Khader, Shahbaz
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ABB Future Labs, CH-5405 Baden, Switzerland..
    Yin, Hang
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Falco, Pietro
    ABB Corp Res, S-72178 Västerås, Sweden..
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Data-Efficient Model Learning and Prediction for Contact-Rich Manipulation Tasks2020In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 5, no 3, p. 4321-4328Article in journal (Refereed)
    Abstract [en]

    In this letter, we investigate learning forward dynamics models and multi-step prediction of state variables (long-term prediction) for contact-rich manipulation. The problems are formulated in the context of model-based reinforcement learning (MBRL). We focus on two aspects, discontinuous dynamics and data-efficiency, both of which are important in the identified scope and pose significant challenges to state-of-the-art methods. We contribute to closing this gap by proposing a method that explicitly adopts a specific hybrid structure for the model while leveraging the uncertainty representation and data-efficiency of Gaussian processes. Our experiments on an illustrative moving-block task and a 7-DOF robot demonstrate a clear advantage when compared to popular baselines in low data regimes.
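    The data-efficient core of such a model, Gaussian process regression giving a predictive mean and variance for the next state change, can be sketched as follows. The toy 1-D dynamics and kernel hyperparameters are invented; the paper combines GP regression with an explicit hybrid structure to handle discontinuities:

    ```python
    import numpy as np

    def rbf(a, b, ell=1.0, sf=1.0):
        """Squared-exponential kernel between two sets of inputs."""
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return sf**2 * np.exp(-0.5 * d2 / ell**2)

    def gp_predict(Xtr, ytr, Xte, noise=1e-2):
        """GP posterior mean and variance -- the one-step forward model
        used for MBRL-style prediction with uncertainty."""
        K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
        Ks = rbf(Xte, Xtr)
        alpha = np.linalg.solve(K, ytr)
        mean = Ks @ alpha
        var = rbf(Xte, Xte).diagonal() - (Ks * np.linalg.solve(K, Ks.T).T).sum(1)
        return mean, var

    # Toy 1-D "dynamics": state change as a smooth function of state (made up).
    Xtr = np.linspace(-2, 2, 15)[:, None]
    ytr = np.sin(Xtr[:, 0])
    mean, var = gp_predict(Xtr, ytr, np.array([[0.5]]))
    ```

    Rolling such one-step predictions forward, while propagating the variance, yields the multi-step (long-term) predictions the letter evaluates.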
