kth.se Publications
1 - 50 of 235
  • 1.
    Alexandru, Iordan
    et al.
    Norwegian University of Science and Technology Trondheim.
    Podobas, Artur
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Natvig, Lasse
    Norwegian University of Science and Technology Trondheim.
    Brorsson, Mats
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Investigating the Potential of Energy-savings Using a Fine-grained Task Based Programming Model on Multi-cores (2011). Conference paper (Refereed)
    Abstract [en]

    In this paper we study the relation between energy efficiency and parallel execution when implemented with a fine-grained task-centric programming model. Using a simulation framework comprising an architectural simulator and a power and area estimation tool, we have investigated the potential energy savings when employing parallelism on multi-core systems. In our experiments with 2 - 8 core systems, we employed frequency and voltage scaling in order to keep the relative performance of the systems constant and measured the energy efficiency using the energy-delay product. We also compared the energy consumption of the parallel execution against the serial one. Our results show that through judicious choice of load balancing parameters, significant improvements of around 200% in energy consumption can be achieved.

    Download full text (pdf)
    iordan-podobas-a4mmc-2011.pdf
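The energy-delay product (EDP) metric used in the abstract above can be illustrated with a small sketch. The constants and the perfect-parallelism assumption are hypothetical, purely for illustration, and are not taken from the paper:

```python
# Hypothetical sketch of the energy-delay product (EDP) under frequency and
# voltage scaling. All numbers below are made-up illustration values.

def dynamic_energy(c_eff, voltage, frequency, seconds):
    """Dynamic energy in joules: P = C_eff * V^2 * f, E = P * t."""
    return c_eff * voltage**2 * frequency * seconds

def edp(work_cycles, cores, frequency, voltage, c_eff=1e-9):
    """EDP for `work_cycles` of perfectly parallel work on `cores` cores."""
    delay = work_cycles / (cores * frequency)           # seconds to finish
    energy = cores * dynamic_energy(c_eff, voltage, frequency, delay)
    return energy * delay

# Serial run at 2 GHz / 1.2 V versus 4 cores at 0.5 GHz / 0.8 V.
# Total cycles and relative performance are kept constant, as in the paper's
# experimental setup; the lower voltage is what shrinks the parallel EDP.
serial = edp(work_cycles=2e9, cores=1, frequency=2e9, voltage=1.2)
parallel = edp(work_cycles=2e9, cores=4, frequency=0.5e9, voltage=0.8)
```

With these illustrative numbers both runs take one second, but the parallel run at reduced voltage consumes less energy and therefore has the smaller EDP.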
  • 2.
    Al-Shishtawy, Ahmad
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Enabling and Achieving Self-Management for Large Scale Distributed Systems: Platform and Design Methodology for Self-Management (2010). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Autonomic computing is a paradigm that aims at reducing administrative overhead by using autonomic managers to make applications self-managing. To better deal with large-scale dynamic environments, and to improve scalability, robustness, and performance, we advocate distributing management functions among several cooperative autonomic managers that coordinate their activities in order to achieve management objectives. Programming autonomic management in turn requires programming environment support and higher-level abstractions to become feasible.

    In this thesis we present an introductory part and a number of papers that summarize our work in the area of autonomic computing. We focus on enabling and achieving self-management for large-scale and/or dynamic distributed applications. We start by presenting our platform, called Niche, for programming self-managing component-based distributed applications. Niche supports a network-transparent view of the system architecture, simplifying the design of application self-* code, and provides a concise and expressive API for such code. The implementation of the framework relies on the scalability and robustness of structured overlay networks. We have also developed a distributed file storage service, called YASS, to illustrate and evaluate Niche.

    After introducing Niche we proceed by presenting a methodology and design space for designing the management part of a distributed self-managing application in a distributed manner. We define design steps that include partitioning of management functions and orchestration of multiple autonomic managers. We illustrate the proposed design methodology by applying it to the design and development of an improved version of our distributed storage service YASS as a case study.

    We continue by presenting a generic policy-based management framework which has been integrated into Niche. Policies are sets of rules that govern system behavior and reflect business goals or system management objectives. Policy-based management is introduced to simplify management and reduce overhead by setting up policies to govern system behavior. A prototype of the framework is presented and two generic policy languages (policy engines and corresponding APIs), namely SPL and XACML, are evaluated using our self-managing file storage application YASS as a case study.

    Finally, we present a generic approach to achieving robust services that is based on finite state machine replication with dynamic reconfiguration of replica sets. We contribute a decentralized algorithm that maintains the set of resources hosting service replicas in the presence of churn. We use this approach to implement robust management elements as robust services that can operate despite churn.


    Download full text (pdf)
    FULLTEXT01
  • 3.
    Al-Shishtawy, Ahmad
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Asif Fayyaz, Muhammad
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Popov, Konstantin
    Swedish Institute of Computer Science.
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Achieving robust self-management for large-scale distributed applications (2010). Report (Other (popular science, discussion, etc.))
    Abstract [en]

    Autonomic managers are the main architectural building blocks for constructing self-management capabilities of computing systems and applications. One of the major challenges in developing self-managing applications is the robustness of the management elements which form autonomic managers. We believe that transparent handling of the effects of resource churn (joins/leaves/failures) on management should be an essential feature of a platform for self-managing large-scale dynamic distributed applications, because it facilitates the development of robust autonomic managers and hence improves the robustness of self-managing applications. This feature can be achieved by providing a robust management element abstraction that hides churn from the programmer. In this paper, we present a generic approach to achieving robust services that is based on finite state machine replication with dynamic reconfiguration of replica sets. We contribute a decentralized algorithm that maintains the set of nodes hosting service replicas in the presence of churn. We use this approach to implement robust management elements as robust services that can operate despite churn. Our proposed decentralized algorithm uses peer-to-peer replica placement schemes to automate replicated state machine migration in order to tolerate churn. Our algorithm exploits the lookup and failure detection facilities of a structured overlay network for managing the set of active replicas. Using the proposed approach, we can achieve a long-running and highly available service, without human intervention, in the presence of resource churn. In order to validate and evaluate our approach, we have implemented a prototype that includes the proposed algorithm.


  • 4.
    Al-Shishtawy, Ahmad
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Bao, Lin
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Policy based self-management in distributed environments (2010). In: 2010 Fourth IEEE International Conference on Self-Adaptive and Self-Organizing Systems Workshop (SASOW), IEEE Computer Society Digital Library, 2010, p. 256-260. Conference paper (Refereed)
    Abstract [en]

    Currently, increasing costs and escalating complexities are primary issues in distributed system management. Policy-based management is introduced to simplify management and reduce overhead by setting up policies to govern system behavior. Policies are sets of rules that govern system behavior and reflect business goals or system management objectives. This paper presents a generic policy-based management framework which has been integrated into an existing distributed component management system, called Niche, that enables and supports self-management. In this framework, programmers can set up more than one Policy-Manager-Group to avoid centralized policy decision making, which could become a performance bottleneck. Furthermore, the size of a Policy-Manager-Group, i.e. the number of Policy-Managers in the group, depends on their load, i.e. the number of requests per time unit. In order to achieve good load balancing, a policy request is delivered to one of the policy managers in the group, chosen randomly on the fly. A prototype of the framework is presented and two generic policy languages (policy engines and corresponding APIs), namely SPL and XACML, are evaluated using a self-managing file storage application as a case study.
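The random on-the-fly dispatch described in this abstract can be sketched in a few lines. The class and manager names below are our own illustration, not the paper's actual API:

```python
import random

# Toy sketch of delivering a policy request to a randomly chosen
# Policy-Manager in a group, avoiding a central decision point.
# Names ("PolicyManagerGroup", "pm1"...) are hypothetical.

class PolicyManagerGroup:
    def __init__(self, managers):
        self.managers = list(managers)
        self.load = {m: 0 for m in self.managers}  # requests handled per manager

    def dispatch(self, request):
        manager = random.choice(self.managers)     # chosen on the fly
        self.load[manager] += 1
        return manager, request

group = PolicyManagerGroup(["pm1", "pm2", "pm3"])
for i in range(300):
    group.dispatch(f"req-{i}")
# Over many requests, the load spreads roughly evenly across the group.
```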

  • 5.
    Al-Shishtawy, Ahmad
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Fayyaz, Muhammad Asif
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Popov, Konstantin
    Swedish Institute of Computer Science (SICS), Kista, Sweden.
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Achieving Robust Self-Management for Large-Scale Distributed Applications (2010). In: Self-Adaptive and Self-Organizing Systems (SASO), 2010 4th IEEE International Conference on: SASO 2010, IEEE Computer Society, 2010, p. 31-40. Conference paper (Refereed)
    Abstract [en]

    Achieving self-management can be challenging, particularly in dynamic environments with resource churn (joins/leaves/failures). Dealing with the effect of churn on management increases the complexity of the management logic and thus makes its development time consuming and error prone. We propose the abstraction of robust management elements (RMEs), which are able to heal themselves under continuous churn. Using RMEs allows the developer to separate the issue of dealing with the effect of churn on management from the management logic. This facilitates the development of robust management by making the developer focus on managing the application while relying on the platform to provide the robustness of management. RMEs can be implemented as fault-tolerant long-living services. We present a generic approach and an associated algorithm to achieve fault-tolerant long-living services. Our approach is based on replicating a service using finite state machine replication with a reconfigurable replica set. Our algorithm automates the reconfiguration (migration) of the replica set in order to tolerate continuous churn. The algorithm uses P2P replica placement schemes to place replicas and uses the P2P overlay to monitor them. The replicated state machine is extended to analyze monitoring data in order to decide on when and where to migrate. We describe how to use our approach to achieve robust management elements. We present a simulation-based evaluation of our approach which shows its feasibility.

  • 6.
    Al-Shishtawy, Ahmad
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Khan, Tareq Jamal
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Robust Fault-Tolerant Majority-Based Key-Value Store Supporting Multiple Consistency Levels (2011). In: 2011 IEEE 17th International Conference on Parallel and Distributed Systems (ICPADS), 2011, p. 589-596. Conference paper (Refereed)
    Abstract [en]

    The wide spread of Web 2.0 applications with rapidly growing amounts of user-generated data, such as wikis, social networks, and media sharing, has posed new challenges for the supporting infrastructure, in particular for storage systems. In order to meet these challenges, Web 2.0 applications have to trade off between the high availability and the consistency of their data. Another important issue is the privacy of user-generated data, which can be threatened by the organizations that own and control the datacenters where user data are stored. We propose a large-scale, robust and fault-tolerant key-value object store that is based on a peer-to-peer network owned and controlled by a community of users. To meet the demands of Web 2.0 applications, the store supports an API consisting of different read and write operations with various data consistency guarantees, from which a wide range of web applications can choose operations according to their data consistency, performance, and availability requirements. For evaluation, simulation has been carried out to test the system's availability, scalability, and fault tolerance in a dynamic, Internet-wide environment.
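The majority-based consistency idea behind such a store can be sketched with a classic quorum rule: if a write touches W replicas and a read touches R replicas with R + W > N, every read quorum overlaps the latest write quorum. The sketch below illustrates that rule only; it is our own toy model, not the paper's actual design or API:

```python
import random

# Toy quorum-based key-value store (hypothetical, for illustration).
# Each replica maps key -> (timestamp, value); the newest timestamp wins.

class QuorumStore:
    def __init__(self, n=5):
        self.n = n
        self.replicas = [dict() for _ in range(n)]

    def put(self, key, value, ts, w):
        # Write to a quorum of w randomly chosen replicas.
        for rep in random.sample(self.replicas, w):
            rep[key] = (ts, value)

    def get(self, key, r):
        # Read from a quorum of r randomly chosen replicas.
        votes = [rep[key] for rep in random.sample(self.replicas, r) if key in rep]
        return max(votes)[1] if votes else None  # newest (ts, value) wins

store = QuorumStore(n=5)
store.put("x", "v1", ts=1, w=3)
value = store.get("x", r=3)  # R + W = 6 > N = 5, so the read sees "v1"
```

Weaker (and cheaper) consistency levels correspond to choosing smaller R or W so that quorums no longer have to overlap.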

  • 7.
    Al-Shishtawy, Ahmad
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Brand, Per
    Swedish Institute of Computer Science.
    Haridi, Seif
    Swedish Institute of Computer Science.
    A design methodology for self-management in distributed environments (2009). In: IEEE International Conference on Computational Science and Engineering, 2009, p. 430-436. Conference paper (Refereed)
    Abstract [en]

    Autonomic computing is a paradigm that aims at reducing administrative overhead by providing autonomic managers to make applications self-managing. In order to better deal with dynamic environments, and for improved performance and scalability, we advocate distributing management functions among several cooperative managers that coordinate their activities in order to achieve management objectives. We present a methodology for designing the management part of a distributed self-managing application in a distributed manner. We define design steps that include partitioning of management functions and orchestration of multiple autonomic managers. We illustrate the proposed design methodology by applying it to the design and development of a distributed storage service as a case study. The storage service prototype has been developed using the distributed component management system Niche. Distribution of autonomic managers spreads the management overhead and increases management performance due to concurrency and better locality.

  • 8.
    Apelkrans, Mats
    et al.
    Dept of Informatics, Jönköping International Business School.
    Håkansson, Anne
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Visual knowledge modeling of an Information Logistics Process: A case study (2005). In: Intellectual Capital, Knowledge Management and Organisational Learning: ICICKM 2005 / [ed] Dan Remenyi, Reading, UK: ACPI, 2005. Conference paper (Refereed)
  • 9.
    Arad, Cosmin
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Dowling, Jim
    Haridi, Seif
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Building and Evaluating P2P Systems using the Kompics Component Framework (2009). In: 2009 IEEE Ninth International Conference on Peer-to-Peer Computing (P2P 2009), New York: IEEE, 2009, p. 93-94. Conference paper (Refereed)
    Abstract [en]

    We present a framework for building and evaluating P2P systems in simulation, local execution, and distributed deployment. Such uniform system evaluations increase confidence in the obtained results. We briefly introduce the Kompics component model and its P2P framework. We describe the component architecture of a Kompics P2P system and show how to define experiment scenarios for large dynamic systems. The same experiments are conducted in reproducible simulation, in real-time execution on a single machine, and distributed over a local cluster or a wide area network. This demonstration shows the component oriented design and the evaluation of two P2P systems implemented in Kompics: Chord and Cyclon. We simulate the systems and then we execute them in real time. During real-time execution we monitor the dynamic behavior of the systems and interact with them through their web-based interfaces. We demonstrate how component-oriented design enables seamless switching between alternative protocols.

  • 10.
    Arad, Cosmin
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Haridi, Seif
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Practical Protocol Composition, Encapsulation and Sharing in Kompics (2008). In: SASOW 2008: Second IEEE International Conference on Self-Adaptive and Self-Organizing Systems Workshops, Proceedings / [ed] Serugendo GD, Los Alamitos: IEEE Computer Soc, 2008, p. 266-271. Conference paper (Refereed)
    Abstract [en]

    At the core of any distributed system is a set of concurrent distributed algorithms that coordinate the functionality of the distributed system. We present Kompics, a component-based and compositional software architecture that facilitates building distributed protocols. The underlying computation model subsumes that of event-based systems, SEDA (staged event-driven architecture), and thread-based models. We illustrate various salient features of Kompics, such as ease of use, compositionality, and configurability, through a series of well-chosen distributed protocols.

  • 11. Aydt, Heiko
    et al.
    Turner, Stephen J.
    Cai, Wentong
    Low, Malcolm Yoke Hean
    Ong, Yew-Soon
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Toward an Evolutionary Computing Modeling Language (2011). In: IEEE Transactions on Evolutionary Computation, ISSN 1089-778X, E-ISSN 1941-0026, Vol. 15, no 2, p. 230-247. Article in journal (Refereed)
    Abstract [en]

    The importance of domain knowledge in the design of effective evolutionary algorithms (EAs) is widely acknowledged in the meta-heuristics community. In the last few decades, a plethora of EAs has been manually designed by domain experts for solving domain-specific problems. Specialization has been achieved mainly by embedding available domain knowledge into the algorithms. Although programming libraries have been made available to construct EAs, a unifying framework for designing specialized EAs across different problem domains and branches of evolutionary computing does not exist yet. In this paper, we address this issue by introducing an evolutionary computing modeling language (ECML) which is based on the unified modeling language (UML). ECML incorporates basic UML elements and introduces new extensions that are specially needed for the evolutionary computation domain. Subsequently, the concept of meta evolutionary algorithms (MEAs) is introduced as a family of EAs that is capable of interpreting ECML. MEAs are solvers that are not restricted to a particular problem domain or branch of evolutionary computing through the use of ECML. By separating problem-specific domain knowledge from the EA implementation, we show that a unified framework for evolutionary computation can be attained. We demonstrate our approach by applying it to a number of examples.

  • 12.
    Bao, Yan
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Brorsson, Mats
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    An Implementation of Cache-Coherence for the Nios II™ Soft-core Processor (2009). Conference paper (Refereed)
    Abstract [en]

    Soft-core programmable processors mapped onto field-programmable gate arrays (FPGAs) can be considered equivalents of a microcontroller. They combine central processing units (CPUs), caches, memories, and peripherals on a single chip. Soft-core processors represent an increasingly common embedded software implementation option. Modern FPGA soft-cores are parameterized to support application-specific customization. However, these soft-core processors are designed to be used in uniprocessor systems, not in multiprocessor systems. This project describes an implementation that solves the cache coherency problem in an Altera Nios II soft-core multiprocessor system.

    Download full text (pdf)
    fulltext
  • 13. Basit, K. A.
    et al.
    Matskin, Mihhail
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    GUMO inspired ontology to support user experience based Citywide Mobile Learning (2011). In: Proc. - Int. Conf. User Sci. Eng., i-USEr, 2011, p. 195-200. Conference paper (Refereed)
    Abstract [en]

    User experience has been extensively discussed in the literature, yet the idea of applying it to explain and comprehend the conceptualization of Mobile Learning (ML) is relatively new. Consequently, much of the existing work is mainly theoretical and concentrates on establishing and explaining the relationship between ML and experience. Little has been done to apply or adopt it in practice. In contrast to currently existing approaches, this paper presents an ontology to support Citywide Mobile Learning (CML). The ontology addresses three fundamental aspects of CML, namely the User Model, User Experience, and the Places/Spaces which exist in the city. It not only attempts to model and translate theoretical concepts such as user experience and Places/Spaces into a citywide context for Mobile Learning, but also applies them in practice. The ontology is used in our system to support Place/Space based CML.

  • 14. Bichler, Robert M.
    et al.
    Bradley, Gunilla
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Hofkirchner, Wolfgang
    Sustainable development and ICTs (2010). In: Information, Communication and Society, ISSN 1369-118X, E-ISSN 1468-4462, Vol. 13, no 1, p. 1-5. Article in journal (Refereed)
  • 15.
    Boman, Magnus
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS. Swedish Institute of Computer Science, Sweden.
    Commentary: The joy of mesh (2009). In: BMJ. British Medical Journal, ISSN 0959-8146, E-ISSN 0959-535X, Vol. 337, p. a2500. Article in journal (Refereed)
  • 16.
    Bradley, Gunilla
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Collaboration between people for sustainability in the ICT society (2007). In: Human Interface and the Management of Information: Interacting in Information Environments, Pt 2, Proceedings / [ed] Smith, MJ; Salvendy, G, Berlin: Springer-Verlag, 2007, Vol. 4558, p. 703-712. Conference paper (Refereed)
    Abstract [en]

    In the present Network Period of IT history, deep changes are taking place in collaboration between people and in human communication: its structure, quantity, and quality. A dominant steering factor in the design and structure of work life as well as private life is the convergence of three technologies: computer technology, tele technology, and media technology (ICT). Telecommunication technology has come to play a more and more dominant role in this convergence, especially Internet and web technology. Embedded (ubiquitous) computer technology is making the process invisible, and media technology converges within itself (multimedia or cross media). Well-functioning organizational and psychosocial communication is an important prerequisite for successful industrial and social change in the ICT society. Managing and working in an organization organized as a network involves communication between people, groups, units, other organisations, and various combinations of these entities. ICT applications, together with deep knowledge of and insight into organisational design and management (ODAM), are the keys to social change. The author describes her convergence theory on ICT and the Psychosocial Life Environment, with special emphasis on psychosocial communication and sustainability in the Net Era of the ICT society.

  • 17.
    Bradley, Gunilla
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Social and community informatics: Humans on the net (2006). Book (Other academic)
    Abstract [en]

    As a discipline, Informatics has developed over the years from its initial focus on data processing and software development towards a more recent emphasis on people's use of technology and its impact on their working and private lives. Gunilla Bradley, an internationally recognized expert in this field, has researched this area for many years and here authors an indispensable volume on the topic. Providing a broad and deep analysis of the relationship between people, ICT, society, and the environment, Bradley examines the impact on, and change in, organizations and individuals, both in the workplace and in the home. Taking a firmly humanistic view, she also looks to the future as ICT increasingly transforms and impacts our lives, and explores issues including stress, power, competence, and psychosocial communication. She proposes normative research questions for the future and presents actions to achieve the good ICT society. This thought-provoking book will be of interest to students and academics studying social informatics, computing, and MIS, as well as organizational behaviour, sociology, psychology, and communications. Research-based and cross-disciplinary, Bradley's book is a valuable and topical resource.

  • 18.
    Bradley, Gunilla
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    The convergence theory and the good ICT society - Trends and visions (2009). In: Industrial Engineering and Ergonomics: Visions, Concepts, Methods and Tools, Springer Berlin/Heidelberg, 2009, p. 43-55. Chapter in book (Refereed)
    Abstract [en]

    The area of Information and Communication Technology (ICT) and its interaction with social changes at the organizational, individual, and societal levels has, in the 21st century, attracted increasing attention due to the depth and breadth of ICT use. The ICT-related disciplines have focused far too much on technology push, in contrast to human needs and requirements in the development, introduction, and use of ICT. This was also the reason why, when organising and chairing the Fourth ODAM conference (Organisational Design and Management) in Stockholm in 1994, this author gave the conference the subtitle "Development, Introduction and Use of New Technology - Challenges for Human Organisation and Human Resource Development in a Changing World".

  • 19.
    Bradley, Gunilla
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    The convergence theory on ICT, society and human beings - towards the good ICT society (2010). In: tripleC: Communication, Capitalism & Critique, E-ISSN 1726-670X, Vol. 8, no 2, p. 183-192. Article in journal (Refereed)
    Abstract [en]

    The convergence model illustrates ongoing changes in the Net Society. The theoretical model, however, goes back to and synthesises the theoretical framework of research on the psychosocial work environment and computerization. Interdisciplinary research programs were initiated by the author in the 1970s and have since analyzed changes in society related to various periods in "the history" of ICT. The description of the convergence model is structured with reference to the concepts of Globalization, ICT, Life Environment, Life Role, and Effects on Humans. Both Convergence and Interactions are important features of the model. There are four levels of analysis: individual, organisational, community, and societal.

  • 20.
    Bradley, Gunilla
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    The convergence theory on ICT, society, and human beings: Towards the good ICT society (2010). In: Information and Communication Technologies, Society and Human Beings: Theory and Framework, IGI Global, 2010, p. 30-46. Chapter in book (Refereed)
    Abstract [en]

    The convergence model illustrates ongoing changes in the Net Society. The theoretical model synthesises the theoretical framework in the author's research on the psychosocial work environment and computerization. Interdisciplinary research programs were initiated by the author in the 1970s, leading to analysis of societal changes related to various periods in 'the history' of ICT. The description of the convergence model is structured with reference to the core concepts of Globalisation, ICT, Life Environment, Life Role, and Effects on Humans. Convergence and Interactions are important features of the model that organizes analysis at the individual, organisational, community, and societal levels.

  • 21.
    Brorsson, Mats
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    MipsIt-a simulation and development environment using animation for computer architecture education2002In: / [ed] Ed Gehringer, 2002, p. 65-72Conference paper (Other academic)
    Abstract [en]

    Computer animation is a tool that is nowadays used in more and more fields. In this paper we describe the use of computer animation to support the learning of computer organization. MipsIt is a system consisting of a software development environment, a system and cache simulator, and a highly flexible microarchitecture simulator used for pipeline studies. It has been in use for several years and constitutes an important tool in education at Lund University and KTH, the Royal Institute of Technology, in Sweden.

    Download full text (pdf)
    FULLTEXT01
  • 22.
    Brorsson, Mats
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Collin, Mikael
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Adaptive and flexible dictionary code compression for embedded applications2006In: Proceedings of the 2006 international conference on Compilers, architecture and synthesis for embedded systems, 2006, p. 113-124Conference paper (Refereed)
    Abstract [en]

    Dictionary code compression is a technique where long instructions in memory are replaced with shorter code words used as indices into a table to look up the original instructions. We present a new view of dictionary code compression for moderately high-performance processors for embedded applications. Previous work on dictionary code compression has shown decent performance and energy-savings results, which we verify with our own measurements that are more thorough than previously published. We also augment previous work with a more thorough analysis of the effects of cache and line size changes. In addition, we introduce the concept of aggregated profiling to allow two or more programs to share the same dictionary contents. Finally, we also introduce dynamic dictionaries, where the dictionary contents are considered part of the context of a process, and show that the performance overhead of reloading the dictionary contents on a context switch is negligible, while at the same time considerable energy can be saved with more specialized dictionary contents.

    Download full text (pdf)
    fulltext
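    The dictionary mechanism described in the abstract above can be illustrated with a small sketch (Python; a hypothetical software encoding that shows only the general idea of replacing frequent instructions with code words that index a dictionary, not the paper's hardware design):

    ```python
    # Illustrative sketch of dictionary code compression: frequent
    # instructions are replaced by short code words that index a
    # dictionary holding the original instructions.
    from collections import Counter

    def build_dictionary(program, size):
        """Pick the `size` most frequent instructions as dictionary entries."""
        freq = Counter(program)
        return [instr for instr, _ in freq.most_common(size)]

    def compress(program, dictionary):
        """Replace dictionary hits with ('CW', index); keep the rest verbatim."""
        index = {instr: i for i, instr in enumerate(dictionary)}
        return [('CW', index[i]) if i in index else ('RAW', i) for i in program]

    def decompress(code, dictionary):
        """Expand code words back to the original instruction stream."""
        return [dictionary[v] if kind == 'CW' else v for kind, v in code]

    program = ['add r1,r2', 'ld r3,0(r1)', 'add r1,r2', 'nop', 'add r1,r2']
    d = build_dictionary(program, 2)
    assert decompress(compress(program, d), d) == program
    ```

    The round-trip assertion mirrors the correctness requirement of any such scheme: decompression must exactly reproduce the original instruction stream.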
  • 23.
    Brouwers, Lisa
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Belastning på samhället vid ett utbrott av den nya pandemiska influensan A(H1N1) - preliminära resultat2009Report (Other academic)
    Download full text (pdf)
    fulltext
  • 24.
    Brouwers, Lisa
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Boman, Magnus
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Camitz, M.
    Swedish Institute for Infectious Disease Control, Department of Epidemiology.
    Mäkilä, K.
    Swedish Institute for Infectious Disease Control, Department of Epidemiology.
    Tegnell, A.
    National Board of Health and Welfare, Stockholm, Sweden.
    Micro-simulation of a smallpox outbreak using official register data2010In: Eurosurveillance, ISSN 1560-7917, Vol. 15, no 35, p. 17-24Article in journal (Refereed)
    Abstract [en]

    To explore the efficacy of four vaccine-based policy strategies (ring vaccination, targeted vaccination, mass vaccination, and pre-vaccination of healthcare personnel combined with ring vaccination) for controlling smallpox outbreaks in Sweden, disease transmission on a spatially explicit social network was simulated. The mixing network was formed from high-coverage official register data of the entire Swedish population, building on the Swedish Total Population Register, the Swedish Employment Register, and the Geographic Database of Sweden. The largest reduction measured in the number of infections was achieved when combining ring vaccination with a pre-vaccination of healthcare personnel. In terms of per dose effectiveness, ring vaccination was by far the most effective strategy. The results can to some extent be adapted to other diseases and environments, including other countries, and the methods used can be analysed in their own right.

  • 25.
    Brouwers, Lisa
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Cakici, Baki
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Camitz, Martin
    Karolinska Institutet, MEB.
    Tegnell, Anders
    Socialstyrelsen.
    Boman, Magnus
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Economic consequences to society of pandemic H1N1 influenza 2009: preliminary results for Sweden2009In: Eurosurveillance, ISSN 1025-496X, E-ISSN 1560-7917, Vol. 14, no 37, p. 19333-Article in journal (Refereed)
    Abstract [en]

    Experiments using a microsimulation platform show that vaccination against pandemic H1N1 influenza is highly cost-effective. Swedish society may reduce the costs of the pandemic by about SEK 2.5 billion (approximately EUR 250 million) if at least 60 per cent of the population is vaccinated, even if costs related to deaths are excluded. The cost reduction primarily results from reduced absenteeism. These results are preliminary and based on comprehensive assumptions about the infectiousness and morbidity of the pandemic, which are uncertain in the current situation.

  • 26.
    Brouwers, Lisa
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Ekholm, Anders
    Socialdepartementet.
    Janlöv, Nils
    Socialdepartementet.
    Lindblom, Josepha
    Socialdepartementet.
    Mossler, Karin
    Socialdepartementet.
    Den ljusnande framtid är vård: delresultat från LEV-projektet2010Report (Other (popular science, discussion, etc.))
    Download full text (pdf)
    LEV-rapport
  • 27.
    Brouwers, Lisa
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS. KTH, Superseded Departments (pre-2005), Computer and Systems Sciences, DSV.
    Hansson, Karin
    KTH, Superseded Departments (pre-2005), Computer and Systems Sciences, DSV.
    MicroWorlds as a Tool for Policy Making2001In: Proceedings of Cognitive Research with Microworlds / [ed] J J Canada, 2001Conference paper (Refereed)
  • 28.
    Cakici, Baki
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Disease surveillance systems2011Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Recent advances in information and communication technologies have made the development and operation of complex disease surveillance systems technically feasible, and many systems have been proposed to interpret diverse data sources for health-related signals. Implementing these systems for daily use and efficiently interpreting their output, however, remains a technical challenge.

    This thesis presents a method for understanding disease surveillance systems structurally, examines four existing systems, and discusses the implications of developing such systems. The discussion is followed by two papers. The first paper describes the design of a national outbreak detection system for daily disease surveillance, currently in use at the Swedish Institute for Communicable Disease Control. The source code has been licensed under the GNU GPL v3 and is freely available. The second paper discusses methodological issues in computational epidemiology and presents the lessons learned from a software development project in which a spatially explicit micro-meso-macro model of the entire Swedish population was built from registry data.

    Download full text (pdf)
    cakici-lic-2011
  • 29.
    Cakici, Baki
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Boman, Magnus
    A workflow for software development within computational epidemiology2011In: Journal of Computational Science, ISSN 1877-7503, Vol. 2, no 3, p. 216-222Article in journal (Refereed)
    Abstract [en]

    A critical investigation into computational models developed for studying the spread of communicable disease is presented. The case in point is a spatially explicit micro-meso-macro model for the entire Swedish population built on registry data, thus far used for smallpox and for influenza-like illnesses. The lessons learned from a software development project of more than 100 person months are collected into a check list. The list is intended for use by computational epidemiologists and policy makers, and the workflow incorporating these two roles is described in detail.

    Download full text (pdf)
    cakici-boman-jocs2011.pdf
  • 30.
    Cakici, Baki
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Hebing, Kenneth
    Swedish Institute for Infectious Control (SMI), Solna, Sweden.
    Grünewald, Maria
    Swedish Institute for Infectious Control (SMI), Solna, Sweden.
    Saretok, Paul
    Swedish Institute for Infectious Control (SMI), Solna, Sweden.
    Hulth, Anette
    Swedish Institute for Infectious Control (SMI), Solna, Sweden.
    CASE: a framework for computer supported outbreak detection2010In: BMC Medical Informatics and Decision Making, E-ISSN 1472-6947, Vol. 10, p. 14-Article in journal (Refereed)
    Abstract [en]

    Background: In computer supported outbreak detection, a statistical method is applied to a collection of cases to detect any excess cases for a particular disease. Whether a detected aberration is a true outbreak is decided by a human expert. We present a technical framework designed and implemented at the Swedish Institute for Infectious Disease Control for computer supported outbreak detection, where a database of case reports for a large number of infectious diseases can be processed using one or more statistical methods selected by the user. Results: Based on case information, such as diagnosis and date, different statistical algorithms for detecting outbreaks can be applied, both on the disease level and the subtype level. The parameter settings for the algorithms can be configured independently for different diagnoses using the provided graphical interface. Input generators and output parsers are also provided for all supported algorithms. If an outbreak signal is detected, an email notification is sent to the persons listed as receivers for that particular disease. Conclusions: The framework is available as open source software, licensed under GNU General Public License Version 3. By making the code open source, we wish to encourage others to contribute to the future development of computer supported outbreak detection systems, and in particular to the development of the CASE framework.
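    As a toy illustration of the statistical aberration detection described in the abstract above (the `detect` helper and its two-sigma threshold are illustrative assumptions; CASE's actual algorithms are configurable per diagnosis and considerably more sophisticated):

    ```python
    # Toy aberration detector in the spirit of computer supported outbreak
    # detection: flag a day whose case count exceeds the historical mean
    # by more than two standard deviations.
    from statistics import mean, stdev

    def detect(history, today):
        """history: past daily case counts; returns True if today is aberrant."""
        threshold = mean(history) + 2 * stdev(history)
        return today > threshold

    baseline = [2, 3, 1, 2, 4, 3, 2]
    assert detect(baseline, 20) is True   # clear excess of cases
    assert detect(baseline, 3) is False   # within normal variation
    ```

    In a real system such a signal would only prompt notification; as the abstract notes, whether a detected aberration is a true outbreak is decided by a human expert.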

  • 31.
    Castañeda Lozano, Roberto
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Constraint Programming for Random Testing of a Trading System2010Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Financial markets use complex computer trading systems whose failures can cause serious economic damage, making reliability a major concern. Automated random testing has been shown to be useful in finding defects in these systems, but its inherent test oracle problem (automatic generation of the expected system output) is a drawback that has typically prevented its application on a larger scale.

    Two main tasks have been carried out in this thesis as a solution to the test oracle problem. First, an independent model of a real trading system based on constraint programming, a method for solving combinatorial problems, has been created. Then, the model has been integrated as a true test oracle in automated random tests. The test oracle maintains the expected state of an order book throughout a sequence of random trade order actions, and provides the expected output of every auction triggered in the order book by generating a corresponding constraint program that is solved with the aid of a constraint programming system.

    Constraint programming has allowed the development of an inexpensive, yet reliable test oracle. In 500 random test cases, the test oracle has detected two system failures. These failures correspond to defects that had been present for several years without being discovered either by less complete oracles or by the application of more systematic testing approaches.

    The main contributions of this thesis are: (1) empirical evidence of both the suitability of applying constraint programming to solve the test oracle problem and the effectiveness of true test oracles in random testing, and (2) a first attempt, as far as the author is aware, to model a non-theoretical continuous double auction using constraint programming.

    Download full text (pdf)
    TRITA-ICT-EX-2010:69.pdf
  • 32.
    Castañeda Lozano, Roberto
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Schulte, Christian
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Wahlberg, Lars
    Testing Continuous Double Auctions with a Constraint-Based Oracle2010In: PRINCIPLES AND PRACTICE OF CONSTRAINT PROGRAMMING-CP 2010 / [ed] Cohen D, 2010, Vol. 6308, p. 613-627Conference paper (Refereed)
    Abstract [en]

    Computer trading systems are essential for today's financial markets, where the trading systems' correctness is of paramount economic significance. Automated random testing is a useful technique for finding bugs in these systems, but it requires an independent system to decide the correctness of the system under test (known as the oracle problem). This paper introduces a constraint-based oracle for random testing of a real-world trading system. The oracle provides the expected results by generating and solving constraint models of the trading system's continuous double auction. Constraint programming is essential for the correctness of the test oracle, as the logic for calculating trades can be mapped directly to constraint models. The paper shows that the generated constraint models can be solved efficiently. Most importantly, the approach is shown to be successful by finding errors in a deployed financial trading system and in its specification.
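    For orientation, the kind of matching rule such an oracle must reproduce can be sketched as a textbook price-time-priority continuous double auction (a generic sketch; `match` and its tuple interface are assumptions for illustration, not the tested system's logic):

    ```python
    # Textbook continuous double auction matching: cross the best bid
    # against the best ask while the bid price meets or exceeds the ask,
    # executing at the resting ask price.
    def match(bids, asks):
        """bids/asks: lists of (price, qty), best price first. Returns trades."""
        trades = []
        while bids and asks and bids[0][0] >= asks[0][0]:
            bp, bq = bids[0]
            ap, aq = asks[0]
            qty = min(bq, aq)
            trades.append((ap, qty))      # execute at the resting ask price
            bids[0], asks[0] = (bp, bq - qty), (ap, aq - qty)
            if bids[0][1] == 0:
                bids.pop(0)
            if asks[0][1] == 0:
                asks.pop(0)
        return trades

    # A bid at 10 crosses the ask at 9 for 3 units; the ask at 11 rests.
    assert match([(10, 5)], [(9, 3), (11, 4)]) == [(9, 3)]
    ```

    Encoding such rules as constraints, as the paper does, lets a solver compute the expected trades independently of the system under test.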

  • 33. Cena, Federica
    et al.
    Dokoohaki, Nima
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Matskin, Mihhail
    KTH, School of Information and Communication Technology (ICT), Electronic, Computer and Software Systems, ECS.
    Forging Trust and Privacy with User Modeling Frameworks: An Ontological Analysis2011In: The First International Conference on Social Eco-Informatics: (SOTICS 2011) / [ed] Dokoohaki and Hall, IARIA , 2011, p. 43-48Conference paper (Refereed)
    Abstract [en]

    With the ever increasing importance of social networking sites and services, socially intelligent agents responsible for gathering, managing and maintaining knowledge surrounding individual users are of increasing interest to computing research communities as well as industry. For these agents to fully capture and manage the knowledge about a user's interaction with these social sites and services, a social user model needs to be introduced. A social user model is defined as a generic user model (a model capable of capturing generic information related to a user) plus social dimensions of users (models capturing social aspects of the user, such as activities and social contexts). While existing models capture a proportion of such information, they fail to model and represent two of the most important dimensions of social connectivity: trust and privacy. To this end, in this paper we introduce an ontological model of the social user, composed of a generic user model component, which imports existing well-known user model structures, and a social model, which contains the social dimensions; trust, reputation and privacy become the pivotal concepts gluing the whole ontological knowledge model together.

    Download full text (pdf)
    cena-dokoohaki-sotics2011
  • 34. Chu, Geoffrey
    et al.
    Schulte, Christian
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Stuckey, Peter J.
    Confidence-based Work Stealing in Parallel Constraint Programming2009In: Fifteenth International Conference on Principles and Practice of Constraint Programming, Springer Science+Business Media B.V., 2009, Vol. 5732, p. 226-241Conference paper (Refereed)
    Abstract [en]

    The most popular architecture for parallel search is work stealing: threads that have run out of work (nodes to be searched) steal from threads that still have work. Work stealing not only allows for dynamic load balancing, but also determines which parts of the search tree are searched next. Thus the place from where work is stolen has a dramatic effect on the efficiency of a parallel search algorithm.

    This paper examines quantitatively how optimal work stealing can be performed given an estimate of the relative solution densities of the subtrees at each search tree node and relates it to the branching heuristic strength. An adaptive work stealing algorithm is presented that automatically performs different work stealing strategies based on the confidence of the branching heuristic at each node. Many parallel depth-first search patterns arise naturally from this algorithm. The algorithm produces near-perfect or superlinear algorithmic efficiencies on all problems tested. Real speedups using 8 threads range from 7 times to superlinear.
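    The adaptive stealing decision described above can be sketched as follows (a hypothetical `choose_victim` helper, not the authors' implementation; it only illustrates switching strategy on heuristic confidence):

    ```python
    # Confidence-based victim selection sketch: with a confident branching
    # heuristic, steal where the estimated solution density is highest;
    # with low confidence, steal shallow nodes, which hold larger subtrees
    # and so give better load balance.
    def choose_victim(nodes, confidence, threshold=0.7):
        """nodes: list of (depth, estimated_solution_density) tuples."""
        if confidence >= threshold:
            # Trust the heuristic: go where solutions are likely.
            return max(nodes, key=lambda n: n[1])
        # Low confidence: prefer the shallowest open node.
        return min(nodes, key=lambda n: n[0])

    open_nodes = [(1, 0.05), (4, 0.40), (6, 0.10)]
    assert choose_victim(open_nodes, confidence=0.9) == (4, 0.40)
    assert choose_victim(open_nodes, confidence=0.3) == (1, 0.05)
    ```

    The threshold value here is arbitrary; the point is only that the stealing policy is a function of heuristic confidence at each node.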

  • 35.
    Collin, Mikael
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Brorsson, Mats
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Low Power Instruction Fetch using Profiled Variable Length Instructions2003Conference paper (Refereed)
    Abstract [en]

    Computer system performance depends on a high access rate and low miss rate in the instruction cache, which also affect the energy consumed by fetching instructions. Simulations of a small computer typical of embedded systems show that up to 20% of the overall processor energy is consumed in the instruction fetch path and as much as 23% of the execution time is spent on instruction fetch. One way to increase the instruction memory bandwidth is to fetch more instructions on each access without increasing the bus width. We propose an extension to a RISC ISA with variable length instructions, yielding higher information density without compromising programmability. Based on profiling of dynamic instruction usage and argument locality in a set of SPEC CPU2000 applications, we present a scheme using 8-, 16- and 24-bit instructions accompanied by lookup tables inside the processor. Our scheme yields a 20-30% reduction in static memory usage, and experiments show that up to 60% of all executed instructions are short instructions. The overall energy savings are up to 15% for the entire data path and memory system, and up to 20% in the instruction fetch path.

  • 36.
    Collin, Mikael
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Brorsson, Mats
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Low Power Instruction Fetch using Variable Length Instructions2003Conference paper (Refereed)
  • 37.
    Collin, Mikael
    et al.
    KTH, School of Information and Communication Technology (ICT), Electronic, Computer and Software Systems, ECS.
    Brorsson, Mats
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Two-Level Dictionary Code Compression: a New Scheme to Improve Instruction Code Density of Embedded Applications2009In: CGO 2009: INTERNATIONAL SYMPOSIUM ON CODE GENERATION AND OPTIMIZATION, PROCEEDINGS, LOS ALAMITOS: IEEE COMPUTER SOC , 2009, p. 231-242Conference paper (Refereed)
    Abstract [en]

    Dictionary code compression is a technique which has been studied as a method to reduce the energy consumed in the instruction fetch path of processors. Instructions or instruction sequences in the code are replaced with short code words. These code words are later used to index a dictionary which contains the original uncompressed instruction or an entire sequence. In this paper, we present a new method which improves on code density compared to previously published dictionary methods. It uses a two-level dictionary design and is capable of handling compression of both individual instructions and code sequences of 2-16 instructions. The two dictionaries are in separate pipeline stages and work together to decompress sequences and instructions. The impact on storage size for the dictionaries is rather small as the sequences in the dictionary are stored as individually compressed instructions, instead of normal instructions. Compared to previous dictionary code compression methods we achieve improved dynamic compression rate, potential for better performance with reasonable static compression rate and with still small dictionary size suitable for context switching.

  • 38.
    Collin, Mikael
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Brorsson, Mats
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Öberg, Johnny
    KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    A performance and energy exploration of dictionary code compression architectures2011In: 2011 International  Green Computing Conference and Workshops (IGCC), IEEE conference proceedings, 2011, p. 1-8Conference paper (Refereed)
    Abstract [en]

    We have made a performance and energy exploration of a previously proposed dictionary code compression mechanism in which frequently executed individual instructions and/or sequences are replaced in memory with short code words. Our simulated design shows a dramatically reduced instruction memory access frequency, leading to a performance improvement for small instruction cache sizes and to significantly reduced energy consumption in the instruction fetch path. We have evaluated the performance and energy implications of three architectural parameters: branch prediction accuracy, instruction cache size, and organization. To assess the complexity of the design we have implemented the critical stages in VHDL.

    Download full text (pdf)
    fulltext
  • 39.
    de Palma, Noel
    et al.
    INRIA, France.
    Popov, Konstantin
    Swedish Institute of Computer Science (SICS), Kista, Sweden.
    Parlavantzas, Nikos
    INRIA, Grenoble, France.
    Brand, Per
    Swedish Institute of Computer Science (SICS), Kista, Sweden.
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Tools for Architecture Based Autonomic Systems2009In: ICAS: 2009 Fifth International Conference on Autonomic and Autonomous Systems, IEEE Communications Society, 2009, p. 313-320Conference paper (Refereed)
    Abstract [en]

    Recent years have seen a growing interest in autonomic computing, an approach to providing systems with self managing properties. Autonomic computing aims to address the increasing complexity of the administration of large systems. The contribution of this paper is to provide a generic tool to ease the development of autonomic managers. Using this tool, an administrator provides a set of alternative architectures and specifies conditions that are used by autonomic managers to update architectures at runtime. Software changes are computed as architectural differences in terms of component model artifacts (components, attributes, bindings, etc.). These differences are then used to migrate into the next architecture by reconfiguring only the required part of the running system.

  • 40. Delgado, Alberto
    et al.
    Møller Jensen, Rune
    Schulte, Christian
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Generating Optimal Stowage Plans for Container Vessel Bays2009In: PRINCIPLES AND PRACTICE OF CONSTRAINT PROGRAMMING, 2009, Vol. 5732, p. 6-20Conference paper (Refereed)
    Abstract [en]

    Millions of containers are stowed every week with goods worth billions of dollars, but container vessel stowage is an all but neglected combinatorial optimization problem. In this paper, we introduce a model for stowing containers in a vessel bay which is the result of probably the longest collaboration to date with a liner shipping company on automated stowage planning. We then show how to solve this model efficiently in - to our knowledge - the first application of CP to stowage planning, using state-of-the-art techniques such as extensive use of global constraints, viewpoints, static and dynamic symmetry breaking, decomposed branching strategies, and early failure detection. Our CP approach outperforms an integer programming and column generation approach in a preliminary study. Since a complete model of this problem includes even more logical constraints, we believe that stowage planning is a new application area for CP with high impact potential.

  • 41.
    Dokoohaki, Nima
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Deliverable D2.1 - Report of User Profile Formal Represen-tation and Metadata Keyword Extension: EU FP7 Smartmuseum project Scientific Deliverable2008Report (Other academic)
    Abstract [en]

    SMARTMUSEUM (Cultural Heritage Knowledge Exchange Platform) is a research and development project sponsored under the European Commission's 7th Framework. The overall objective of the project is to develop a platform for innovative services enhancing on-site personalized access to digital cultural heritage through adaptive and privacy-preserving user profiling. Using on-site knowledge databases, global digital libraries and visitors' experiential knowledge, the platform makes possible the creation of innovative multilingual services for increasing interaction between visitors and cultural heritage objects in a future smart museum environment, taking full benefit of digitized cultural information. The main objective of this deliverable is to deliver a formalization of the user profile format as well as an extension of the keywords used to describe the human side of access to cultural heritage.

  • 42.
    Dokoohaki, Nima
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Kaleli, Cihan
    Polat, Huseyin
    Matskin, Mihhail
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Achieving Optimal Privacy in Trust-Aware Social Recommender Systems2010In: SOCIAL INFORMATICS / [ed] Bolc L; Makowski M; Wierzbicki A, 2010, Vol. 6430, p. 62-79Conference paper (Refereed)
    Abstract [en]

    Collaborative filtering (CF) recommenders are subject to numerous shortcomings such as centralized processing, vulnerability to shilling attacks, and, most important of all, privacy. To overcome these obstacles, researchers have proposed utilizing interpersonal trust between users to alleviate many of these crucial shortcomings. Until now, attention has mainly been paid to the strong points of trust-aware recommenders, such as alleviating profile sparsity or calculation cost efficiency, while little attention has been paid to investigating the notion of privacy surrounding the disclosure of individual ratings and, most importantly, the protection of trust computation across the social networks forming the backbone of these systems. To contribute to addressing the problem of privacy in trust-aware recommenders, in this paper we first introduce a framework for enabling privacy-preserving trust-aware recommendation generation. While the trust mechanism aims at elevating the recommender's accuracy, preserving privacy requires the accuracy of the system to be decreased. Since privacy and accuracy are conflicting goals in this context, we show that a Pareto set can be found as an optimal setting for both the privacy-preserving and trust-enabling mechanisms. We show that this Pareto set, when used as the configuration for measuring the accuracy of the base collaborative filtering engine, yields an optimized tradeoff between the conflicting goals of privacy and accuracy. We prove this concept, along with the applicability of our framework, by experimenting with accuracy and privacy factors, and we show through experiments how such an optimal set can be inferred.

  • 43.
    Dokoohaki, Nima
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Matskin, Mihhail
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Effective Design of Trust Ontologies for Improvement in the Structure of Socio-Semantic Trust Networks2008In: International Journal On Advances in Intelligent Systems, ISSN 1942-2679, Vol. 1, no 1, p. 23-42Article in journal (Refereed)
  • 44.
    Dokoohaki, Nima
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Matskin, Mihhail
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Personalizing Human Interaction through Hybrid Ontological Profiling: Cultural Heritage Case Study2008In: 1st Workshop on Semantic Web Applications and Human Aspects, (SWAHA08), 2008, p. 133-140Conference paper (Refereed)
    Abstract [en]

    In this paper we present a novel user profile formalization, which allows describing user attributes as well as the history of user access for a personalized, adaptive and interactive experience. While we believe that our approach is applicable to different semantic applications, we illustrate our solution in the context of online and on-site visits to museums and exhibits. We argue that a generic structure will allow the incorporation of multiple dimensions of user attributes and characteristics, as well as allowing different abstraction levels for profile formalization and presentation. In order to construct such profile structures, we extend and enrich existing metadata vocabularies for cultural heritage with keywords pertaining to usage attributes and user-related keywords. By extending metadata vocabularies we allow improved matchmaking between extended user profile contents and cultural heritage contents. This extension creates the possibility of further personalizing access to cultural heritage available through online and on-site digital libraries.

  • 45.
    Dokoohaki, Nima
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Matskin, Mihhail
    Norwegian University of Science and Technology.
    Quest: An Adaptive Framework for User Profile Acquisition from Social Communities of Interest2010In: Proceedings - 2010 International Conference on Advances in Social Network Analysis and Mining, ASONAM 2010, 2010, p. 360-364Conference paper (Refereed)
    Abstract [en]

    In this paper we introduce a framework for semi- to fully-automatic discovery and acquisition of bag-of-words style interest profiles from openly accessible Social Web communities. To this end, we construct a semantic taxonomy search tree for the target domain (the domain for which we are acquiring profiles), starting with generic concepts at the root down to specific instances at the leaves. We then use one of the proposed Quest methods, namely Depth-based, N-Split or Greedy, to read the concept labels from the tree and crawl the source social network for profiles containing the corresponding topics. The cached profiles are then mined in a two-step approach, using a clusterer and a classifier to generate a predictive model of weighted profiles, which are later used by a semantic recommender to suggest items of similar interest to community members.
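    The tree-reading step described above can be sketched as follows. This is not the Quest implementation; it only shows what a depth-based traversal of a taxonomy tree looks like, with a made-up music-domain taxonomy, where the yielded labels would seed crawling queries against the source social network.

    ```python
    # Illustrative sketch: depth-first walk of a semantic taxonomy tree,
    # yielding concept labels from the generic root down to specific
    # leaf instances. The taxonomy below is a hypothetical example.

    def depth_first_labels(tree, node):
        """Yield node labels in depth-first (root-to-leaf) order."""
        yield node
        for child in tree.get(node, []):
            yield from depth_first_labels(tree, child)

    taxonomy = {
        "music": ["rock", "jazz"],
        "rock": ["progressive rock"],
        "jazz": ["bebop", "swing"],
    }

    print(list(depth_first_labels(taxonomy, "music")))
    # ['music', 'rock', 'progressive rock', 'jazz', 'bebop', 'swing']
    ```

    The N-Split and Greedy variants named in the abstract would differ in how they choose which branches to expand, not in the basic label-yielding mechanism sketched here.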

  • 46.
    Dokoohaki, Nima
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Matskin, Mihhail
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Reasoning about Weighted Semantic User Profiles through Collective Confidence Analysis: A Fuzzy Evaluation2010In: ADVANCES IN INTELLIGENT WEB MASTERING-2, PROCEEDINGS    / [ed] Snasel V; Szczepaniak PS; Abraham A; Kacprzyk J, 2010, Vol. 67, p. 71-81Conference paper (Refereed)
    Abstract [en]

    User profiles are widely used to alleviate the growing problem of so-called information overload. Many important issues of the Semantic Web, such as trust, privacy, matching and ranking, have a certain degree of vagueness and involve truth degrees that one needs to represent and reason about. In this setting, profiles are useful because they allow these uncertain attributes to be incorporated into profiled material in the form of weights. In order to interpret and reason about these uncertain values, we have constructed a fuzzy confidence model through which the values can be collectively analyzed and interpreted as the collective experience confidence of users. We analyze this model within a scenario comprising the weighted user profiles of a semantically enabled cultural heritage knowledge platform. Initial simulation results show the benefits of our mechanism for alleviating the problem of sparse and empty profiles.
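    The idea of treating profile weights as fuzzy truth degrees can be sketched minimally. This is not the paper's model: the linear membership function, the averaging step, and the cultural-heritage item names are all assumptions for illustration.

    ```python
    # Illustrative sketch, assuming profile weights in [0, 1] are
    # interpreted as fuzzy truth degrees and aggregated across users.

    def membership_high(w):
        """Assumed fuzzy membership of weight w in a 'high confidence'
        set, rising linearly from 0 at w=0.5 to 1 at w=1.0."""
        if w <= 0.5:
            return 0.0
        return (w - 0.5) / 0.5

    def collective_confidence(profiles, item):
        """Average 'high' membership of an item's weight across
        profiles, skipping profiles where the item is missing
        (which is how sparse profiles are tolerated)."""
        degrees = [membership_high(p[item]) for p in profiles if item in p]
        if not degrees:
            return 0.0
        return sum(degrees) / len(degrees)

    # Hypothetical weighted profiles from three users.
    profiles = [
        {"sculpture": 0.9, "painting": 0.6},
        {"sculpture": 0.7},
        {"painting": 0.4},
    ]
    print(collective_confidence(profiles, "sculpture"))  # ≈ 0.6
    ```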

  • 47.
    Dokoohaki, Nima
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Ruotsalo, Tuukka
    Helsinki Institute for Information Technology.
    Kauppinen, Tomi
    Helsinki Institute for Information Technology.
    Mäkelä, Eetu
    Helsinki Institute for Information Technology.
    Deliverable 2.2 -Report describing methods for dynamic user profile creation: EU FP7 Smartmuseum Scientific Deliverable2009Report (Other (popular science, discussion, etc.))
    Abstract [en]

    SMARTMUSEUM (Cultural Heritage Knowledge Exchange Platform) is a research and development project funded under the European Commission's 7th Framework Programme. The overall objective of the project is to develop a platform for innovative services enhancing on-site personalized access to digital cultural heritage through adaptive and privacy-preserving user profiling. Using on-site knowledge databases, global digital libraries and visitors' experiential knowledge, the platform makes possible the creation of innovative multilingual services for increasing interaction between visitors and cultural heritage objects in a future smart museum environment, taking full benefit of digitized cultural information. The main objective of this deliverable is to describe a theoretical framework for the management of dynamic user profiles.

  • 48.
    Dowling, Jim
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Haridi, Seif
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Developing a Distributed Electronic Health-Record Store for India2008In: ERCIM News, ISSN 0926-4981, E-ISSN 1564-0094, no 75, p. 56-57Article, review/survey (Other (popular science, discussion, etc.))
    Abstract [en]

    The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.

    There has been much recent interest in information services that offer to manage an individual's healthcare records in electronic form, with systems such as Microsoft HealthVault and Google Health receiving widespread media attention. These systems are, however, proprietary, and fears have been expressed over how the information stored in them will be used. In relation to these developments, countries with nationalized healthcare systems are also investigating the construction of healthcare information systems that store Electronic Health Records (EHRs) for their citizens.

  • 49.
    Drakenberg, N. Peter
    et al.
    KTH, Superseded Departments (pre-2005), Teleinformatics.
    Lundevall, Fredrik
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS. KTH, Superseded Departments (pre-2005), Teleinformatics.
    Lisper, Björn
    Mälardalen University.
    An Efficient Semi-Hierarchical Array Layout2001In: Interaction between Compilers and Computer Architectures / [ed] Gyungho Lee, Pen-Chung Yew, Kluwer Academic Publishers, 2001, p. 21-43Conference paper (Refereed)
    Abstract [en]

    For high-level programming languages, linear array layouts (e.g., column-major and row-major orders) have de facto been the sole form of mapping array elements to memory. The increasingly deep and complex memory hierarchies present in current computer systems expose several deficiencies of linear array layouts. One such deficiency is that linear array layouts strongly favor locality in one index dimension of multidimensional arrays. Secondly, the exact mapping of array elements to cache locations depends on the array's size, which effectively renders linear array layouts non-analyzable with respect to cache behavior. We present and evaluate an alternative, semi-hierarchical, array layout which differs from linear array layouts by being neutral with respect to locality in different index dimensions and by enabling accurate and precise analysis of cache behavior at compile time. Simulation results indicate that the proposed layout may exhibit vastly improved TLB behavior, leading to clearly measurable improvements in execution time, despite a lack of suitable hardware support for address computations. Cache behavior is formalized in terms of conflict vectors, and it is shown how to compute such conflict vectors at compile time.
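    The contrast between a linear layout and a tiled, locality-neutral one can be sketched with address computations. This is not the paper's layout; it is a generic B x B blocked mapping, with the tile size chosen arbitrarily, showing why neighbors along *either* index dimension stay close in memory.

    ```python
    # Illustrative sketch: row-major vs. a simple B x B blocked
    # ("semi-hierarchical") mapping of a 2-D array to linear memory.
    # Tiles are stored contiguously, so locality is treated symmetrically
    # in both index dimensions, unlike row-major order, which favors the
    # last dimension.

    B = 4  # tile edge; in practice matched to cache-line or page size

    def row_major(i, j, ncols):
        """Linear offset of element (i, j) in row-major order."""
        return i * ncols + j

    def blocked(i, j, ncols):
        """Linear offset of element (i, j) under a B x B tiled layout.
        ncols is assumed to be a multiple of B."""
        tiles_per_row = ncols // B
        tile = (i // B) * tiles_per_row + (j // B)      # which tile
        within = (i % B) * B + (j % B)                   # offset inside it
        return tile * B * B + within

    # Vertical neighbors in a 16-column array:
    print(blocked(1, 2, 16), blocked(2, 2, 16))      # 6 10  -> 4 apart
    print(row_major(1, 2, 16), row_major(2, 2, 16))  # 18 34 -> 16 apart
    ```

    Because the blocked offset depends on the fixed tile size B rather than on the full array extent, its cache mapping is easier to analyze at compile time, which is the property the abstract's conflict vectors formalize.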

  • 50.
    El-Mekawy, Mohamed
    et al.
    DSV, Stockholm universitet.
    Östman, Anders
    Shahzad, Khurram
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Geospatial Integration: Preparing Building Information Databases for Integration with CityGML for Decision Support2008In: Proceedings of Innovations 08th, IEEE Conference, Dubai, December 16-18, 2008Conference paper (Refereed)
    Abstract [en]

    The purpose of this study is to prepare an Industry Foundation Classes (IFC) database for integration with CityGML, so that it can be used for decision support. A case study is carried out for this purpose. Using the factors that affect decision-making in the case study, the deficiencies of the IFC database are identified. In order to handle these deficiencies, we have prepared the IFC database for integration with CityGML. The steps of the IFC database preparation are: a) identification of sources, b) preliminary IFC schema development, c) identification of 3-D city modeling deficiencies, and d) extensions to the IFC database.
