kth.se Publications
251 - 300 of 1225
  • 251. Comanducci, Dario
    et al.
    Maki, Atsuto
    Toshiba Research Europe Cambridge CB4 0GZ, UK.
    Colombo, Carlo
    Cipolla, Roberto
    2D-3D Photo Rendering for 3D Displays, 2010. Conference paper (Refereed)
  • 252.
    Corcoran, Diarmuid
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science.
    AI-enabled RAN automation, 2021, In: Ericsson Technology Review, ISSN 0014-0171, Vol. 10. Article in journal (Other (popular science, discussion, etc.))
    Abstract [en]

    Communication service providers need a greater degree of RAN automation to cope with the increasingly advanced RAN. Getting there will require an increased use of artificial intelligence and machine-learning techniques.

    A significant and growing portion of communication service providers’ (CSPs) opex relates to the manual tuning of algorithms in RANs that do not exploit the full potential of the networks in the field. As 5G and cloud-native RAN implementations continue, the skill level needed to operate the RAN will continue to rise. Our AI-centered approach to RAN automation is designed to overcome both of these challenges. 

    Download full text (pdf)
    fulltext
  • 253.
    Corcoran, Diarmuid
    Ericsson AB, Kista, Sweden.
    Performance overhead of KVM on Linux 3.9 on ARM Cortex-A15, 2014, In: ACM SIGBED Review, Volume 11, Issue 2, June 2014. Conference paper (Refereed)
    Abstract [en]

    A number of simple performance measurements of network, CPU and disk speed were made on a dual ARM Cortex-A15 machine running Linux inside a KVM virtual machine that uses virtio disk and networking. Unexpected behaviour was observed in the CPU- and memory-intensive benchmarks, and in the networking benchmarks. The average overhead of running inside KVM is between zero and 30 percent when the host is lightly loaded (running only the system software and the necessary qemu-system-arm virtualization code), but the relative overhead increases when both host and VM are busy. We conjecture that this is related to the scheduling inside the host Linux.

  • 254.
    Corcoran, Diarmuid
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Systematic Data-Driven Continual Self-Learning, 2023. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    There is a lot of unexploited potential in using data-driven and self-learning methods to dramatically improve automatic decision-making and control in complex industrial systems. So far, and on a relatively small scale, these methods have demonstrated some potential to achieve performance gains for the automated tuning of complex distributed systems. However, many difficult questions and challenges remain in relation to how to design methods and organise their deployment and operation into large-scale real-world systems. For systematic and scalable integration of state-of-the-art machine learning into such systems, we propose a structured architectural approach.

    To understand the essential elements of this architecture, we identify a set of foundational challenges and then derive a set of five research questions. These questions drill into the essential and complex interdependency between data streams, self-learning algorithms that never stop learning and the supporting reference and run-time architectural structures. While there is a need for traditional one-shot supervised models, pushing the technical boundaries of automating all classes of machine learning model training will require a continual approach. 

    To support continual learning, real-time data streams are complemented with accurate synthetic data generated for use in model training. By developing and integrating advanced simulations, models can be trained before deployment into a live system, for which system accuracy is then measured quantitatively in realistic scenarios. Reinforcement learning, exploring an action space and qualifying effective dynamic action combinations, is here employed for effective network policy learning. While single-agent and centralised model training may be appropriate in some cases, distributed multi-agent self-learning is essential in industrial scale systems, and thus such a scalable and energy-efficient approach is developed, implemented and analysed in detail. 

    Energy usage minimisation in software- and hardware-intensive communication systems, such as the 5G radio access system, is an important and difficult problem in its own right. Our work has focused on energy-aware approaches, applying self-learning methods both to energy-reduction applications and to the algorithms themselves. Using this approach, we can demonstrate clear energy savings while at the same time improving system performance.

    Perhaps most importantly, our work attempts to form an understanding of the broader industrial system issues of applying self-learning approaches at scale. Our results take clear, formative steps towards large-scale industrialisation of self-learning approaches in communication systems such as 5G.

    Download full text (pdf)
    thesis
  • 255. Corcoran, Diarmuid
    The Good, The Bad And The Ugly: Experiences With Model Driven Development In Large Scale Projects At Ericsson, 2010, part of the Lecture Notes in Computer Science book series (LNCS, volume 6138). Conference paper (Other academic)
    Abstract [en]

    This talk will deal with the practical experiences of large-scale deployment of Model Driven Engineering practices within parts of the Ericsson development organisation. We try to present a balanced argument for why Model Driven Development is a powerful concept in large-scale engineering projects, but also cover many of its nasty aspects and attempt to reason about the nature of these failings. We then finish with a look at the future of Model Driven Development as we see it and present a taste of our vision of the future.

  • 256.
    Corcoran, Diarmuid
    Ericsson Software Research, Stockholm, Sweden.
    Toward a Tailored Modeling of Non-Functional Requirements for Telecommunication Systems, 2011, In: 2011 Eighth International Conference on Information Technology: New Generations, 2011. Conference paper (Refereed)
    Abstract [en]

    Addressing non-functional requirements in Real-Time Embedded Systems (RTES) is of critical importance, as proper functionality of the whole system depends heavily on satisfying these requirements. In model-based approaches for developing systems in the RTES domain, there are several methods and languages for modeling and analysing non-functional requirements. However, the domain contains different types of systems with different sets of non-functional requirements, and general modeling approaches for RTES may not cover all the needs of subdomains such as telecommunication. In this poster paper, we suggest an approach to complement and apply general RTES modeling languages to better cover the different non-functional requirements of telecommunication systems.

  • 257.
    Corcoran, Diarmuid
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science.
    Ermedahl, Andreas
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Granbom, Catrin
    Artificial intelligence in RAN – a software framework for AI-driven RAN automation, 2020, In: Ericsson Technology Review, ISSN 0014-0171. Article in journal (Other (popular science, discussion, etc.))
    Abstract [en]

    Artificial intelligence and its subfield machine learning offer well-established techniques for solving historically difficult multi-parameterization problems. Used correctly, these techniques have tremendous potential to overcome complex cross-domain automation challenges in radio networks.

    Our ongoing research reveals that an integrated framework of software enablers will be essential to success.

    Download full text (pdf)
    ETR-AI
  • 258.
    Corcoran, Diarmuid
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. Ericsson AB.
    Kreuger, Per
    RISE AI Research Institutes of Sweden, Kista, Sweden.
    Boman, Magnus
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Reinforcement Learning for Automated Energy Efficient Mobile Network Performance Tuning, 2021, In: Proceedings of the 2021 17th International Conference on Network and Service Management: Smart Management for Future Networks and Services, CNSM 2021, Institute of Electrical and Electronics Engineers (IEEE), 2021, p. 216-224. Conference paper (Refereed)
    Abstract [en]

    Modern mobile networks are increasingly complex from a resource management perspective, with diverse combinations of software, infrastructure elements and services that need to be configured and tuned for correct and efficient operation. It is well accepted in the communications community that appropriately dimensioned, efficient and reliable configuration of systems like 5G, or indeed its predecessor 4G, is a massive technical challenge. One promising avenue is the application of machine learning methods, taking a data-driven and continuous learning approach to automated system performance tuning. We demonstrate the effectiveness of policy-gradient reinforcement learning as a way to learn and apply complex interleaving patterns of radio resource block usage in 4G and 5G, in order to automate the reduction of cell-edge interference. We show that our method can increase overall spectral efficiency by up to 25% and overall system energy efficiency by up to 50% in very challenging scenarios, by learning how to do more with fewer system resources. We also introduce a flexible, phased and continuous learning approach that can be used to train a bootstrap model in a simulated environment, after which the model is transferred to a live system for continuous contextual learning.

  • 259.
    Corcoran, Diarmuid
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. Ericsson AB.
    Kreuger, Per
    Schulte, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Efficient Real-Time Traffic Generation for 5G RAN, 2020, In: Proceedings of IEEE/IFIP Network Operations and Management Symposium 2020: Management in the Age of Softwarization and Artificial Intelligence, NOMS 2020, Institute of Electrical and Electronics Engineers (IEEE), 2020, article id 9110314. Conference paper (Refereed)
    Abstract [en]

    Modern telecommunication and mobile networks are increasingly complex from a resource management perspective, with diverse combinations of software and infrastructure elements that need to be configured and tuned for efficient operation with high quality of service. Increased real-time automation at all levels and time-frames is a critical tool in controlling this complexity. A key component in automation is practical and accurate simulation methods that can be used in live traffic scenarios. This paper introduces a new method, with supporting algorithms, for sampling key parameters from live or recorded traffic, which can be used to generate large volumes of synthetic traffic with very similar rate distributions and temporal characteristics. Multiple spatial renewal processes are used to generate fractional Gaussian noise, which is scaled and transformed into a log-normal rate distribution with discrete arrival events, fitted to the properties observed in given recorded traces. This approach works well for modelling large user aggregates, but is especially useful for medium-sized and relatively small aggregates, where existing methods struggle to reproduce the most important properties of recorded traces. Experimental comparisons with data collected from an operational LTE network demonstrate that the technique is highly useful in supporting self-learning and automation algorithms, which can ultimately reduce complexity, increase energy efficiency, and reduce total network operation costs.

  • 260. Coroama, V. C.
    et al.
    Schien, D.
    Preist, C.
    Hilty, Lorenz M.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Sustainable Communications, CESC.
    The energy intensity of the internet: Home and access networks, 2015, In: Advances in Intelligent Systems and Computing, ISSN 2194-5357, E-ISSN 2194-5365, Vol. 310, p. 137-155. Article in journal (Refereed)
    Abstract [en]

    Estimates of the energy intensity of the Internet diverge by several orders of magnitude. We present existing assessments and identify diverging definitions of the system boundary as the main reason for this large spread. The decision of whether or not to include end devices influences the result by 1–2 orders of magnitude. If end devices are excluded, customer premises equipment (CPE) and access networks have a dominant influence. Of less influence is the consideration of cooling equipment and other overhead, redundancy equipment, and the amplifiers in the optical fibers. We argue against the inclusion of end devices when assessing the energy intensity of the Internet, but in favor of including CPE, access networks, redundancy equipment, cooling and other overhead as well as optical fibers. We further show that the intensities of the metro and core network are best modeled as energy per data, while the intensity of CPE and access networks are best modeled as energy per time (i.e., power), making overall assessments challenging. The chapter concludes with a formula for the energy intensity of CPE and access networks. The formula is presented both in generic form as well as with concrete estimates of the average case to be used in quick assessments by practitioners. The next chapter develops a similar formula for the core and edge networks. Taken together, the two chapters provide an assessment method of the Internet’s energy intensity that takes into account different modeling paradigms for different parts of the network.

  • 261. Corodescu, A. -A
    et al.
    Nikolov, N.
    Khan, A. Q.
    Soylu, A.
    Matskin, Mihhail
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Payberah, Amir H.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Roman, D.
    Locality-aware workflow orchestration for big data, 2021, In: ACM International Conference Proceeding Series, Association for Computing Machinery, 2021, p. 62-70. Conference paper (Refereed)
    Abstract [en]

    The development of the Edge computing paradigm shifts data processing from centralised infrastructures to heterogeneous and geographically distributed infrastructures. Such a paradigm requires data processing solutions that consider data locality in order to reduce the performance penalties of data transfers between remote (in network terms) data centres. However, existing Big Data processing solutions have limited support for handling data locality and are inefficient in processing the small and frequent events specific to Edge environments. This paper proposes a novel architecture and a proof-of-concept implementation for software container-centric Big Data workflow orchestration that puts data locality at the forefront. Our solution considers any available data locality information by default, leverages long-lived containers to execute workflow steps, and handles the interaction with different data sources through containers. We compare our system with Argo workflow and show that our data-locality-aware Big Data workflow approach achieves significant improvements in execution speed for processing units of data.

  • 262.
    Corodescu, Andrei-Alin
    et al.
    Univ Oslo, Dept Informat, N-0373 Oslo, Norway.
    Nikolov, Nikolay
    SINTEF AS, Software & Serv Innovat, N-0373 Oslo, Norway.
    Khan, Akif Quddus
    Norwegian Univ Sci & Technol, Dept Comp Sci, N-2815 Gjovik, Norway.
    Soylu, Ahmet
    OsloMet Oslo Metropolitan Univ, Dept Comp Sci, N-0166 Oslo, Norway.
    Matskin, Mihhail
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Payberah, Amir H.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Roman, Dumitru
    SINTEF AS, Software & Serv Innovat, N-0373 Oslo, Norway.
    Big Data Workflows: Locality-Aware Orchestration Using Software Containers, 2021, In: Sensors, E-ISSN 1424-8220, Vol. 21, no 24, article id 8212. Article in journal (Refereed)
    Abstract [en]

    The emergence of the edge computing paradigm has shifted data processing from centralised infrastructures to heterogeneous and geographically distributed infrastructures. Therefore, data processing solutions must consider data locality to reduce the performance penalties from data transfers among remote data centres. Existing big data processing solutions provide limited support for handling data locality and are inefficient in processing small and frequent events specific to the edge environments. This article proposes a novel architecture and a proof-of-concept implementation for software container-centric big data workflow orchestration that puts data locality at the forefront. The proposed solution considers the available data locality information, leverages long-lived containers to execute workflow steps, and handles the interaction with different data sources through containers. We compare the proposed solution with Argo workflows and demonstrate a significant performance improvement in the execution speed for processing the same data units. Finally, we carry out experiments with the proposed solution under different configurations and analyze individual aspects affecting the performance of the overall solution.

  • 263. Coto, A.
    et al.
    Guanciale, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Tuosto, E.
    On Testing Message-Passing Components, 2020, In: Leveraging Applications of Formal Methods, Verification and Validation, Springer Nature, 2020, Vol. 12476, p. 22-38. Conference paper (Refereed)
    Abstract [en]

    We instantiate and apply a recently proposed abstract framework featuring an algorithm for the automatic generation of tests for component testing of message-passing systems. We demonstrate the application of a top-down mechanism for test generation. More precisely, we reduce the problem of generating tests for components of message-passing applications to the projection of global views of choreographies. Applying the framework to some examples gives us the occasion to make some observations about our approach.

  • 264.
    Coto, Alex
    et al.
    Gran Sasso Science Institute, L’Aquila, Italy.
    Guanciale, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Tuosto, Emilio
    Gran Sasso Science Institute, L’Aquila, Italy.
    Choreographic development of message-passing applications: a tutorial, 2020, In: Lecture Notes in Computer Science book series, Springer, 2020, Vol. 12134, p. 20-36. Conference paper (Refereed)
    Abstract [en]

    Choreographic development envisages distributed coordination as determined by interactions that allow peer components to harmoniously realise a given task. Unlike in orchestration-based coordination, there is no special component directing the execution. Recently, choreographic approaches have become popular in industrial contexts where reliability and scalability are crucial factors. This tutorial reviews some recent ideas to harness choreographic development of message-passing software. The key features of the approach are showcased within a toolchain which allows software architects to identify defects of message-passing applications at early stages of development.

  • 265. Coyle, D.
    et al.
    Thieme, A.
    Linehan, C.
    Balaam, Madeline
    Wallace, J.
    Lindley, S.
    Emotional wellbeing, 2014, In: International Journal of Human-Computer Studies, ISSN 1071-5819, E-ISSN 1095-9300, Vol. 72, no 8-9. Article in journal (Refereed)
  • 266.
    Cremona, Fabio
    et al.
    Univ Calif Berkeley, Berkeley, CA 94720 USA.
    Lohstroh, Marten
    Univ Calif Berkeley, Comp Sci, Berkeley, CA 94720 USA.
    Broman, David
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Lee, Edward A.
    Univ Calif Berkeley, EECS, Berkeley, CA 94720 USA; Univ Calif Berkeley, EE Div, Berkeley, CA 94720 USA; Univ Calif Berkeley, EECS Dept, Berkeley, CA 94720 USA.
    Masin, Michael
    IBM Res Haifa, Syst & IoT Engn Grp, Haifa, Israel.
    Tripakis, Stavros
    Univ Calif Berkeley, Berkeley, CA 94720 USA; Aalto Univ, Espoo, Finland.
    Hybrid co-simulation: it's about time, 2019, In: Software and Systems Modeling, ISSN 1619-1366, E-ISSN 1619-1374, Vol. 18, no 3, p. 1655-1679. Article in journal (Refereed)
    Abstract [en]

    Model-based design methodologies are commonly used in industry for the development of complex cyber-physical systems (CPSs). There are many different languages, tools, and formalisms for model-based design, each with its strengths and weaknesses. Instead of accepting some weaknesses of a particular tool, an alternative is to embrace heterogeneity, and to develop tool integration platforms and protocols to leverage the strengths from different environments. A fairly recent attempt in this direction is the functional mock-up interface (FMI) standard that includes support for co-simulation. Although this standard has reached acceptance in industry, it provides only limited support for simulating systems that mix continuous and discrete behavior, which are typical of CPS. This paper identifies the representation of time as a key problem, because the FMI representation does not support well the discrete events that typically occur at the cyber-physical boundary. We analyze alternatives for representing time in hybrid co-simulation and conclude that a superdense model of time using integers only solves many of these problems. We show how an execution engine can pick an adequate time resolution, and how disparities between time representations internal to co-simulated components and the resulting effects of time quantization can be managed. We propose a concrete extension to the FMI standard for supporting hybrid co-simulation that includes integer time, automatic choice of time resolution, and the use of absent signals. We explain how these extensions can be implemented modularly within the frameworks of existing simulation environments.

  • 267.
    Damnoenkittikun, Ratchai
    KTH, School of Information and Communication Technology (ICT).
    Social Media in VideoCafé 2.0: How Social Media promote informal communication in VideoCafe, 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Video Café 2.0 is a media space that aims to support informal communication among the EIT laboratories, which are located in different countries: Sweden, Finland, the Netherlands, Germany and France. The system creates an always-online, real-time audio and video environment allowing people to communicate at any time. The first version of VideoCafe was originated by K. Tollmar in 1996 to study how video-mediated communication technology plays a role in informal communication at the workplace. The results showed that the number of established relationships between two labs increased from 26 to 39 after 6 months. The study also reported that there was very little activity through the system: because there is no real context for informal contacts, communication with unfamiliar people is difficult to establish.

    To overcome the problems mentioned above, a social media service (cyber space) is added to the corresponding physical space. Twitter is introduced as an additional tool. This powerful medium is very accessible, allowing users to create content and to share and receive information in real time. It can be seen as a playful medium that engages people during casual time at the workplace. The core research question is how social media can promote interaction in VideoCafe. We hypothesise that people will use social media as an asynchronous communication channel as well as a synchronous channel to support the video conferencing session. We also expect that messages from other EIT nodes will keep users in touch, and that worldwide news feeds will engage people with the social display, increasing the chance of interaction.

    A simple Twitter client is provided on top of the video stream. Users can read and write messages from the VideoCafe display, mobile phones and other devices. We also propose a new mode for the Twitter client, called TwitterShow, which animates Twitter messages in 3D in full screen with a transparent background. The purpose is to make the social screen more attractive. Three evaluation techniques are used to assess and shape the prototypes in different ways: a quick-and-dirty evaluation, usability testing (retrospective think-aloud protocol) and a field study. The evaluations were conducted alongside the iterative design.

    Many design errors were detected and fixed during the quick-and-dirty evaluation and the usability testing. The outcome is the final prototype, which was installed in the field study. This experiment ran for two weeks. The results show that Twitter as a backchannel in a media space supports asynchronous more than synchronous interactions, and that these interactions are short and mostly about the current situation around the physical spaces. Even though people felt that they did not learn about other nodes, status updates helped them feel more connected, meaning the tool can raise awareness of all VideoCafe nodes. The tool also facilitated synchronous collaboration when the main channel became problematic. During the test, Twitter was used as a complement to the main channel, and it can initiate casual encounters. From the users' perspective, Twitter is not a very interesting engagement tool, since two-way communication barely happened; they felt that Twitter added very few interactions.

  • 268.
    Dan, Jurca
    et al.
    NTT DOCOMO Eurolabs in Munich, Germany.
    Stadler, Rolf
    KTH, School of Electrical Engineering (EES).
    H-GAP: Estimating Histograms of Local Variables with Accuracy Objectives for Distributed Real-Time Monitoring, 2010, In: IEEE Transactions on Network and Service Management, ISSN 1932-4537, E-ISSN 1932-4537, Vol. 7, no 2, p. 83-95. Article in journal (Refereed)
    Abstract [en]

    We present H-GAP, a protocol for continuous monitoring, which provides a management station with the value distribution of local variables across the network. The protocol estimates the histogram of local state variables for a given accuracy and with minimal overhead. H-GAP is decentralized and asynchronous to achieve robustness and scalability, and it executes on an overlay interconnecting management processes in network devices. On this overlay, the protocol maintains a spanning tree and updates the histogram through incremental aggregation. The protocol is tunable in the sense that it allows controlling, at runtime, the trade-off between protocol overhead and an accuracy objective. This functionality is realized through dynamic configuration of local filters that control the flow of updates towards the management station. The paper includes an analysis of the problem of histogram aggregation over aggregation trees, a formulation of the global optimization problem, and a distributed solution containing heuristic, tree-based algorithms. Using SUM as an example, we show how general aggregation functions over local variables can be efficiently computed with H-GAP. We evaluate our protocol through simulation using real traces. The results demonstrate the controllability of H-GAP in a selection of scenarios and its efficiency in large-scale networks.

    Download full text (pdf)
    fulltext
  • 269. Daneshtalab, M.
    et al.
    Hemani, Ahmed
    KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Palesi, M.
    Message from the chairs, 2013, In: MES '13: Proceedings of the First International Workshop on Many-core Embedded Systems, 2013. Conference paper (Refereed)
  • 270.
    Danniswara, Ken
    et al.
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Peiro Sajjad, Hooman
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Al-Shishtawy, Ahmad
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Stream Processing in Community Network Clouds, 2015, In: Future Internet of Things and Cloud (FiCloud), 2015 3rd International Conference on, IEEE conference proceedings, 2015, p. 800-805. Conference paper (Refereed)
    Abstract [en]

    Community Network Cloud is an emerging distributed cloud infrastructure that is built on top of a community network. The infrastructure consists of a number of geographically distributed compute and storage resources, contributed by community members, that are linked together through the community network. Stream processing is an important enabling technology that, if provided in a Community Network Cloud, would enable a new class of applications, such as social analysis, anomaly detection, and smart home power management. However, modern stream processing engines are designed to be used inside a data center, where servers communicate over a fast and reliable network. In this work, we evaluate the Apache Storm stream processing framework in an emulated Community Network Cloud in order to identify the challenges and bottlenecks that exist in the current implementation. The community network emulation was performed using data collected from the Guifi.net community network, Spain. Our evaluation results show that, with proper configuration of the heartbeats, it is possible to run Apache Storm in a Community Network Cloud. The performance is sensitive to the placement of the Storm components in the network. The deployment of management components on well-connected nodes improves the Storm topology scheduling time, fault tolerance, and recovery time. Our evaluation also indicates that the Storm scheduler and the stream groupings need to be aware of the network topology and location of stream sources in order to optimally place Storm spouts and bolts to improve performance.

    Download full text (pdf)
    Stream Processing in Community Network Clouds
  • 271.
    Datta, Anwitaman
    et al.
    Swiss Federal Institute of Technology Lausanne.
    Girdzijauskas, Sarunas
    Swiss Federal Institute of Technology Lausanne.
    Aberer, Karl
    Swiss Federal Institute of Technology Lausanne.
    On de Bruijn routing in distributed hash tables: There and back again2004In: The 4th IEEE International Conference on Peer-to-Peer Computing, proceedings, 2004, p. 159-166Conference paper (Refereed)
    Abstract [en]

    We show in this paper that de Bruijn networks, despite providing efficient search while using constant routing table size, and despite being simple to understand and implement, are unsuitable where the key distribution is uneven, a realistic scenario for most practical applications. In the presence of arbitrarily skewed data distributions, it has only recently been shown that some traditional P2P overlay networks with non-constant (typically logarithmic) instead of constant routing table size can meet the conflicting objectives of storage load balancing and search efficiency. By showing that de Bruijn networks fail to meet these dual objectives, this paper opens up a more general problem for the research community: whether P2P systems with constant routing tables can at all achieve the conflicting objectives of retaining search efficiency as well as storage load balancing, while preserving key ordering (which leads to uneven key distribution).

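    The constant-size routing table discussed in the abstract comes from the structure of the de Bruijn graph: a node forwards a query by shifting its own identifier and appending the next unmatched symbol of the target key, so each node needs only a constant number of neighbours. A minimal sketch in Python (the function names and the binary alphabet are illustrative assumptions, not the paper's implementation):

```python
def de_bruijn_next_hop(current: str, target: str) -> str:
    """One routing step in a binary de Bruijn graph: shift the current
    node ID left and append the next unmatched symbol of the target."""
    m = len(current)
    # Find the longest suffix of `current` that is a prefix of `target`.
    overlap = 0
    for i in range(m, 0, -1):
        if current[-i:] == target[:i]:
            overlap = i
            break
    if overlap == m:
        return target  # identifiers already coincide
    return current[1:] + target[overlap]

def route(source: str, target: str):
    """Full routing path from source to target; at most len(source) hops."""
    path = [source]
    while path[-1] != target:
        path.append(de_bruijn_next_hop(path[-1], target))
    return path
```

    Each hop extends the matched prefix of the target by at least one symbol, so routing takes at most m hops for m-symbol identifiers; the paper's point is that this efficiency does not survive skewed key distributions.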
  • 272. Daudi, M.
    et al.
    Thoben, K. -D
    Baalsrud Hauge, Jannicke
    KTH, School of Technology and Health (STH). Bremer Institut für Produktion und Logistik at the University of Bremen, Bremen, Germany.
    An Approach for Surfacing Hidden Intentions and Trustworthiness in Logistics Resource Sharing Networks2018In: 19th IFIP WG 5.5 Working Conference on Virtual Enterprises, PRO-VE 2018, Springer-Verlag New York, 2018, Vol. 534, p. 524-536Conference paper (Refereed)
    Abstract [en]

    Collaboration on sharing logistics resources aims to balance supply and demand of idle, inefficiently used, and underutilized resources. Although sharing is beneficial, many issues such as privacy, security, time, regulations, safety, and biased reviews and ratings hinder it. Such problems create many uncertainties, which in consequence lead to low trust in sharing resources. Meanwhile, existing solutions, such as trust and reputation mechanisms and online reviews and ratings, give little consideration to monitoring the hidden intentions and behaviors of partners. Therefore, this paper proposes an approach to surface the hidden intentions and trustworthiness of partners involved in sharing resources. The approach builds on cognitive principles to explore the intentions and trustworthiness of suppliers and consumers of logistics resources. Application of the proposed approach is illustrated using an industrial case extracted from a ridesharing platform.

  • 273. Davidson, N.
    et al.
    Vines, J.
    Bartindale, T.
    Sutton, S.
    Green, D.
    Comber, R.
    Balaam, M.
    Olivier, P.
    Vance, G.
    Supporting self-care of adolescents with nut allergy through video and mobile educational tools2017In: Conference on Human Factors in Computing Systems - Proceedings, 2017, Vol. 2017-JanuaConference paper (Refereed)
    Abstract [en]

    Anaphylaxis is a life-threatening allergic reaction which is rapid in onset. Adolescents living with anaphylaxis risk often lack the knowledge and skills required to safely manage their condition or talk to friends about it. We designed an educational intervention comprising group discussion around videos of simulated anaphylaxis scenarios and a mobile application containing video-based branching anaphylaxis narratives. We trialed the intervention with 36 nut-allergic adolescents. At 1-year follow-up participants had improved adrenaline auto-injector skills and carriage, disease- and age-specific Quality of Life and confidence in anaphylaxis management. At 3-year follow-up adrenaline carriage improved further and confidence remained higher. Participants expressed how the education session was a turning point in taking control of their allergy and how the app facilitated sharing about anaphylaxis with others. We contribute insights regarding design of mobile self-care and peer-support applications for health in adolescence, and discuss strengths and limitations of video-based mobile health interventions.

    Download full text (pdf)
    fulltext
  • 274. Davidsson, P.
    et al.
    Verhagen, Henricus
    KTH.
    Social phenomena simulation2012In: Computational Complexity: Theory, Techniques, and Applications, Springer-Verlag New York, 2012, Vol. 9781461418009, p. 2999-3003Chapter in book (Other academic)
  • 275.
    de C. Gomes, Pedro
    et al.
    KTH.
    Gurov, Dilian
    KTH.
    Huisman, M.
    Artho, Cyrille
    KTH.
    Specification and verification of synchronization with condition variables2018In: Science of Computer Programming, ISSN 0167-6423, E-ISSN 1872-7964, Vol. 163, p. 174-189Article in journal (Refereed)
    Abstract [en]

    This paper proposes a technique to specify and verify the correct synchronization of concurrent programs with condition variables. We define correctness of synchronization as the liveness property: “every thread synchronizing under a set of condition variables eventually exits the synchronization block”, under the assumption that every such thread eventually reaches its synchronization block. Our technique does not avoid the combinatorial explosion of interleavings of thread behaviours. Instead, we alleviate it by abstracting away all details that are irrelevant to the synchronization behaviour of the program, which is typically significantly smaller than its overall behaviour. First, we introduce SyncTask, a simple imperative language to specify parallel computations that synchronize via condition variables. We consider a SyncTask program to have a correct synchronization iff it terminates. Further, to relieve the programmer from the burden of providing specifications in SyncTask, we introduce an economic annotation scheme for Java programs to assist the automated extraction of SyncTask programs capturing the synchronization behaviour of the underlying program. We show that every Java program annotated according to the scheme (and satisfying the assumption mentioned above) has a correct synchronization iff its corresponding SyncTask program terminates. We then show how to transform the verification of termination of the SyncTask program into a standard reachability problem over Coloured Petri Nets that is efficiently solvable by existing Petri Net analysis tools. Both the SyncTask program extraction and the generation of Petri Nets are implemented in our STAVE tool. We evaluate the proposed framework on a number of test cases.

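    The synchronization pattern the paper targets, threads blocking on a condition variable until a predicate holds, can be illustrated with a minimal Python analogue of the Java monitor idiom (a hedged sketch, not the paper's SyncTask language). Every worker below eventually exits its synchronization block, which is exactly the liveness property being verified:

```python
import threading

cond = threading.Condition()
tokens = 0
exited = []

def worker(name, need):
    """A thread synchronizing under the condition variable."""
    global tokens
    with cond:
        # Synchronization block: wait until enough tokens are available.
        while tokens < need:
            cond.wait()
        tokens -= need
        exited.append(name)  # reaching here = exiting the block

def producer(n):
    global tokens
    for _ in range(n):
        with cond:
            tokens += 1
            cond.notify_all()

threads = [threading.Thread(target=worker, args=(f"w{i}", 1)) for i in range(3)]
for t in threads:
    t.start()
producer(3)
for t in threads:
    t.join()
```

    The paper's verification question is whether such a program can deadlock, e.g. if the producer supplied fewer tokens than the workers collectively need; the SyncTask-to-Petri-net reduction decides this without enumerating thread interleavings by hand.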
  • 276.
    de Carvalho Gomes, Pedro
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Automatic Extraction of Program Models for Formal Software Verification2015Doctoral thesis, monograph (Other academic)
    Abstract [en]

    In this thesis we present a study of the generation of abstract program models from programs in real-world programming languages that are employed in the formal verification of software. The thesis is divided into three parts, which cover distinct types of software systems, programming languages, verification scenarios, program models and properties. The first part presents an algorithm for the extraction of control flow graphs from sequential Java bytecode programs. The graphs are tailored for a compositional technique for the verification of temporal control flow safety properties. We prove that the extracted models soundly over-approximate the program behaviour w.r.t. sequences of method invocations and exceptions. Therefore, the properties that are established with the compositional technique over the control flow graphs also hold for the programs. We implement the algorithm as ConFlEx, and evaluate the tool on a number of test cases. The second part presents a technique to generate program models from incomplete software systems, i.e., programs where the implementation of at least one of the components is not available. We first define a framework to represent incomplete Java bytecode programs, and extend the algorithm presented in the first part to handle missing code. Then, we introduce refinement rules, i.e., conditions for instantiating the missing code, and prove that the rules preserve properties established over control flow graphs extracted from incomplete programs. We have extended ConFlEx to support the new definitions, and re-evaluate the tool, now over test cases of incomplete programs. The third part addresses the verification of multithreaded programs. We present a technique to prove the following property of synchronization with condition variables: "If every thread synchronizing under the same condition variables eventually enters its synchronization block, then every thread will eventually exit the synchronization". To support the verification, we first propose SyncTask, a simple intermediate language for specifying synchronized parallel computations. Then, we propose an annotation language for Java programs to assist the automatic extraction of SyncTask programs, and show that, for correctly annotated programs, the above-mentioned property holds if and only if the corresponding SyncTask program terminates. We reduce the termination problem to a reachability problem on Coloured Petri Nets. We define an algorithm to extract nets from SyncTask programs, and show that a program terminates if and only if its corresponding net always reaches a particular set of dead configurations. The extraction of SyncTask programs and their translation into Petri nets is implemented as the STaVe tool. We evaluate the technique by feeding annotated Java programs to STaVe, then verifying the extracted nets with a standard Coloured Petri Net analysis tool.

    Download full text (pdf)
    Thesis
  • 277.
    de Carvalho Gomes, Pedro
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Sound Modular Extraction of Control Flow Graphs from Java Bytecode2012Licentiate thesis, monograph (Other academic)
    Abstract [en]

    Control flow graphs (CFGs) are abstract program models that preserve the control flow information. They have been widely utilized for many static analyses in the past decades. Unfortunately, previous studies about the CFG construction from modern languages, such as Java, have either neglected advanced features that influence the control flow, or do not provide a correctness argument. This is a bearable issue for some program analyses, but not for formal methods, where the soundness of CFGs is a mandatory condition for the verification of safety-critical properties. Moreover, when developing open systems, i.e., systems in which at least one component is missing, one may want to extract CFGs to verify the available components. Soundness is even harder to achieve in this scenario, because of the unknown inter-dependencies involving missing components. In this work we present two variants of a CFG extraction algorithm from Java bytecode considering precise exceptional flow, which are sound w.r.t. the JVM behavior. The first algorithm extracts CFGs from fully-provided (closed) programs only. It proceeds in two phases. Initially the Java bytecode is translated into a stack-less intermediate representation named BIR, which provides explicit representation of exceptions, and is more compact than the original bytecode. Next, we define the transformation from BIR to CFGs, which, among other features, considers the propagation of uncaught exceptions within method calls. We then establish its correctness: the behavior of the extracted CFGs is shown to be a sound over-approximation of the behavior of the original programs. Thus, temporal safety properties that hold for the CFGs also hold for the program. We prove this by suitably combining the properties of the two transformations with those of a previous idealized CFG extraction algorithm, whose correctness has been proven directly. The second variant of the algorithm is defined for open systems. We generalize the extraction algorithm for closed systems to a modular set-up, and resolve inter-dependencies involving missing components by using user-provided interfaces. We establish its correctness by defining a refinement relation between open systems, which constrains the instantiation of missing components. We prove that if the relation holds, then the CFGs extracted from the components of the original open system are sound over-approximations of the CFGs for the same components in the refined system. Thus, temporal safety properties that hold for an open system also hold for closed systems that refine it. We have implemented both algorithms as the ConFlEx tool. It uses Sawja, an external library for the static analysis of Java bytecode, to transform bytecode into BIR, and to resolve virtual method calls. We have extended Sawja to support open systems, and improved its exception type analysis. Experimental results have shown that the algorithm for closed systems generates more precise CFGs than the modular algorithm. This was expected, due to the heavy over-approximations the latter has to perform to be sound. Also, both algorithms are linear in the number of bytecode instructions. Therefore, ConFlEx is efficient for the extraction of CFGs from either open or closed Java bytecode programs.

    Download full text (pdf)
    lic_thesis-pedro_gomes
  • 278.
    de Carvalho Gomes, Pedro
    et al.
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Campos, Sérgio
    Universidade Federal de Minas Gerais.
    Borges Vieira, Alex
    Universidade Federal de Juiz de Fora.
    Verification of P2P live streaming systems using symmetry-based semiautomatic abstractions2012In: Proceedings of the 2012 International Conference on High Performance Computing and Simulation, HPCS 2012, IEEE , 2012, p. 343-349Conference paper (Refereed)
    Abstract [en]

    P2P systems are among the most efficient data transport technologies in use today. In particular, P2P live streaming systems have been growing in popularity recently. However, analyzing such systems is difficult. Developers are not able to perform complete testing due to system size and complex dynamic behavior. This may lead to protocols with errors, unfairness, or low performance. One way of performing such an analysis is using formal methods. Model checking is one such method that can be used for the formal verification of P2P systems. However, it suffers from the combinatorial explosion of states. The problem can be minimized with techniques such as abstraction and symmetry reduction. This work combines both techniques to produce reduced models that can be verified in feasible time. We present a methodology to generate abstract models of reactive systems semi-automatically, based on the model's symmetry. It defines modeling premises to make the abstraction procedure semi-automatic, i.e., applicable without modification of the model. Moreover, it presents abstraction patterns based on the system symmetry and shows which properties are consistent with each pattern. The reductions obtained by the methodology were significant. In our test case of a P2P network, it enabled the verification of liveness properties over the abstract models that did not finish on the original model after more than two weeks of intensive computation. Our results indicate that the use of model checking for the verification of P2P systems is feasible, and that our modeling methodology can increase the efficiency of the verification algorithms enough to enable the analysis of real, complex P2P live streaming systems.

  • 279.
    de Carvalho Gomes, Pedro
    et al.
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Gurov, Dilian
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Huisman, Marieke
    University of Twente.
    Algorithmic Verification of Synchronization with Condition Variables2015Report (Other academic)
    Abstract [en]

    Condition variables are a common synchronization mechanism present in many programming languages. Still, due to the combinatorial complexity of the behaviours the mechanism induces, it has not been addressed sufficiently with formal techniques. In this paper we propose a fully automated technique to prove the correct synchronization of concurrent programs synchronizing via condition variables, where by correctness we mean the liveness property: "If for every set of condition variables, every thread synchronizing under the variables of the set eventually enters its synchronization block, then every thread will eventually exit the synchronization". First, we introduce SyncTask, a simple imperative language to specify parallel computations that synchronize via condition variables. Next, we model the constructs of the language as Petri Net components, and define rules to extract and compose nets from a SyncTask program. We show that a SyncTask program terminates if and only if the corresponding Petri Net always reaches a particular final marking. We thus transform the verification of termination into a reachability problem on the net, which can be solved efficiently by existing Petri Net analysis tools. Further, to relieve the programmer from the burden of having to provide specifications in SyncTask, we introduce an economic annotation scheme for Java programs to assist the automated extraction of SyncTask programs capturing the synchronization behaviour of the underlying program. We show that, for the Java programs that can be annotated according to the scheme, the above-mentioned liveness property holds if and only if the corresponding SyncTask program terminates. Both the SyncTask program extraction and the generation of Petri Nets are implemented in the STaVe tool. We evaluate the proposed verification framework on a number of test cases.

    Download full text (pdf)
    Report
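    The reduction described above turns a termination question into reachability of a final marking. For intuition, reachability in a plain place/transition net (the report uses Coloured Petri Nets, which are richer) can be checked by explicit-state search; the encoding below is an illustrative sketch, not the tooling the report relies on:

```python
from collections import deque

def reachable(initial, transitions, goal):
    """Breadth-first search over Petri-net markings.
    Markings are tuples of token counts per place; each transition
    is a (consume, produce) pair of per-place token counts."""
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        m = frontier.popleft()
        if m == goal:
            return True
        for consume, produce in transitions:
            # A transition is enabled if every input place has enough tokens.
            if all(m[i] >= consume[i] for i in range(len(m))):
                nxt = tuple(m[i] - consume[i] + produce[i] for i in range(len(m)))
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return False

# Two places (waiting, done); one transition moves a thread's token across.
# "Every thread exits its synchronization block" = marking (0, 2) is reached.
step = ((1, 0), (0, 1))
print(reachable((2, 0), [step], (0, 2)))
```

    Explicit search like this only terminates for bounded nets; dedicated Petri Net analysis tools, as used with STaVe, implement far more efficient reachability procedures.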
  • 280.
    de Carvalho Gomes, Pedro
    et al.
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Picoco, Attilio
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Gurov, Dilian
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Sound Extraction of Control-Flow Graphs from open Java Bytecode Systems2012Report (Other academic)
    Abstract [en]

    Formal verification techniques have been widely deployed as means to ensure the quality of software products. Unfortunately, they suffer from the combinatorial explosion of the state space. That is, programs have a large number of states, sometimes infinite. A common approach to alleviate the problem is to perform the verification over abstract models of the program. Control-flow graphs (CFGs) are one of the most common models, and have been widely studied in the past decades. Unfortunately, previous works on modern programming languages, such as Java, have either neglected features that influence the control flow, or do not provide a correctness argument for the CFG construction. This is unacceptable for formal verification, where soundness of CFGs is a mandatory condition for the verification of safety-critical properties. Moreover, one may want to extract CFGs from the available components of an open system, i.e., a system in which at least one of the components is missing. Soundness is even harder to achieve in this scenario, because of the unknown inter-dependencies between software components.

    In the current work we present a framework to extract control-flow graphs from open Java Bytecode systems in a modular fashion. Our strategy requires the user to provide interfaces for the missing components. First, we present a formal definition of open Java bytecode systems. Next, we generalize a previous algorithm that performs the extraction of CFGs for closed programs to a modular set-up. The algorithm uses the user-provided interfaces to resolve inter-dependencies involving missing components. Eventually the missing components arrive, the open system becomes closed, and it can execute. However, the arrival of a component may affect the soundness of CFGs which have been extracted previously. Thus, we define a refinement relation, i.e., a set of constraints upon the arrival of components, and prove that the relation guarantees the soundness of CFGs extracted with the modular algorithm. Therefore, the control-flow safety properties verified over the original CFGs still hold in the refined model.

    We implemented the modular extraction framework in the ConFlEx tool. We have also implemented the reuse of results from previous extractions, to enable the incremental extraction of a newly arrived component. Our technique performs substantial over-approximations to achieve soundness. Despite this, our test cases show that ConFlEx is efficient. Moreover, the extraction of the CFGs gains considerable speed-up by reusing results from previous analyses.

    Download full text (pdf)
    fulltext
  • 281.
    de Fréin, Ruairí
    KTH, School of Electrical Engineering (EES), Communication Networks. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Effect of System Load on Video Service Metrics2015In: Signals and Systems Conference (ISSC) / [ed] IEEE, IEEE conference proceedings, 2015, p. 1-6Conference paper (Refereed)
    Abstract [en]

    Model selection, in order to learn the mapping between the kernel metrics of a machine in a server cluster and a service quality metric on a client's machine, has been addressed by directly applying Linear Regression (LR) to the observations. The popularity of the LR approach is due to: 1) its implementation efficiency; 2) its low computational complexity; and finally, 3) the fact that it generally captures the data relatively accurately. LR can, however, produce misleading results if the LR model does not characterize the system: this deception is due in part to its accuracy. In the client-server service modeling literature, LR is applied to the server and client metrics without treating the load on the system as the cause of the excitation of the system. By contrast, in this paper, we propose a generative model for the server and client metrics and a hierarchical model to explain the mapping between them, which is cognizant of the effects of the load on the system. Evaluations using real traces support the following conclusions: the system load accounts for ≥ 50% of the energy of a high proportion of the client and server metric traces, so modeling the load is crucial; the load signal is localized in the frequency domain, so we can remove the load by deconvolution; and there is a significant phase shift between the kernel and the service-level metrics, which, coupled with the load, heavily biases the results obtained from out-of-the-box LR without any system-identification pre-processing.

    Download full text (pdf)
    fulltext
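    The confounding effect described above, the load driving both the server's kernel metrics and the client's quality metric, can be reproduced in a few lines of simulation. This is an illustrative sketch of the bias, not the paper's generative or hierarchical model; all coefficients and distributions here are invented for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
load = rng.poisson(20, n).astype(float)       # hidden system load
server = 2.0 * load + rng.normal(0, 1, n)     # server kernel metric
client = 0.5 * load + rng.normal(0, 1, n)     # client quality metric

# Naive LR: regress client on server directly; the load is a confounder.
A = np.column_stack([server, np.ones(n)])
coef_naive, *_ = np.linalg.lstsq(A, client, rcond=None)

# Load-aware model: include the load as an explicit regressor.
B = np.column_stack([server, load, np.ones(n)])
coef_aware, *_ = np.linalg.lstsq(B, client, rcond=None)
```

    The naive slope settles near cov/var ≈ 0.25 and looks like a genuine server-to-client relationship, yet once the load is included as a regressor the server metric carries almost no additional information about the client metric, which is the kind of deception-by-accuracy the abstract warns about.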
  • 282.
    de Fréin, Ruairí
    Telecommunications Software and Systems Group, WIT, Ireland.
    Ghostbusters: A Parts-based NMF Algorithm2013In: IEEE Irish Signals and Systems Conference / [ed] IET, IEEE, IET, 2013, p. 1-8Conference paper (Refereed)
    Abstract [en]

    An exact nonnegative matrix decomposition algorithm is proposed. This is achieved by 1) taking a nonlinear approximation of a sparse real-valued dataset at a given tolerance-to-error constraint, c; 2) choosing an arbitrary lectic ordering on the row or column entries; and 3) systematically applying a closure operator, so that all closures are selected. Assuming a nonnegative hierarchical closure structure (a Galois lattice) ensures the data has a unique ordered overcomplete dictionary representation. Parts-based constraints on these closures can then be used to specify and supervise the form of the solution. We illustrate that this approach outperforms NMF on two standard NMF datasets: it exhibits the properties described above; it is correct and exact.

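    For context, the baseline the paper compares against is standard approximate NMF. A compact version of the classic multiplicative-update algorithm is sketched below (the paper's closure-operator and Galois-lattice construction is not reproduced here; the function name and parameters are illustrative):

```python
import numpy as np

def nmf(V, r, iters=1000, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for V ≈ W @ H (Frobenius norm).
    V must be nonnegative; r is the inner rank of the factorization."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(iters):
        # Each update is nonnegativity-preserving and non-increasing in error.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

    Unlike the exact decomposition proposed in the paper, this iterative scheme only approximates V and can stall in local minima, which is precisely the gap the closure-based approach is designed to close.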
  • 283.
    de Fréin, Ruairí
    KTH, School of Electrical Engineering (EES), Communication Networks. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Learning Convolutive Features for Storage and Transmission between Networked Sensors2015In: IEEE International Joint Conference on Neural Networks / [ed] IEEE, IEEE , 2015, p. 1-8Conference paper (Refereed)
    Abstract [en]

    Discovering an efficient representation that reflects the structure of a signal ensemble is a requirement of many Machine Learning and Signal Processing methods, and is gaining increasing prevalence in sensing systems. This type of representation can be constructed by Convolutive Non-negative Matrix Factorization (CNMF), which finds parts-based convolutive representations of non-negative data. However, convolutive extensions of NMF have not yet considered storage efficiency as a side constraint during the learning procedure. To address this challenge, we describe a new algorithm that fuses ideas from the 1) parts-based learning and 2) integer sequence compression literature. The resulting algorithm, Storable NMF (SNMF), enjoys the merits of both techniques: it retains the good-approximation properties of CNMF while also taking into account the size of the symbol set which is used to express the learned convolutive factors and activations. We argue that CNMF is not as amenable to transmission and storage, in networked sensing systems, as SNMF. We demonstrate that SNMF yields a compression ratio ranging from 10:1 up to 20:1, depending on the signal, which gives rise to a similar bandwidth saving for networked sensors.

  • 284.
    de Fréin, Ruairí
    KTH, School of Electrical Engineering (EES), Communication Networks. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    METHOD AND APPARATUS FOR DETERMINING WHETHER A NODE CAN REPRESENT OR BE REPRESENTED BY OTHER NODES WITHIN A NETWORK2013Patent (Other (popular science, discussion, etc.))
    Abstract [en]

    A distributed method for determining whether a node can represent or be represented by other nodes within a network comprises receiving at a node (vn), respective sets of observations (kl) for each neighbour node (N(vn)) of the node across respective links ({vn,vk})) within the network. For each link, a measure (Dn,k) of dissimilarity between the observations for a neighbour node (kl) and the corresponding observations (nl) for the node is determined. Respective inequality measures (Ek) for each neighbour node (N(vn)) of the node are determined, each inequality measure being a function of respective dissimilarity measures (Dn,k) and weights (pn,k) for each link between a neighbour node (vk) and its neighbour nodes. For each link ({vn,vk}) between the node and a respective neighbour node, a weight (pn,k) is updated as a function of the dissimilarity measure (Dn,k) and a previous weight for the link. For each link between the node and a respective neighbour node, the node determines as a function of an inequality measure for the node (En) and the determined inequality measure for the neighbour node (Ek) whether a link between the node and the neighbour node should be maintained. The node then determines, based on the links maintained by the node, if the node can represent neighbour nodes within the network.

  • 285.
    de Fréin, Ruairí
    KTH, School of Electrical Engineering (EES), Communication Networks. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. Waterford Institute of Technology,Ireland.
    Take off a Load: Load-Adjusted Video Quality Prediction and Measurement2015In: 13th IEEE International Conference on Dependable, Autonomic and Secure Computing: 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing,, IEEE Computer Society, 2015, p. 1887-1895Conference paper (Refereed)
    Abstract [en]

    An algorithm for predicting the quality of video received by a client from a shared server is presented. A statistical model for this client-server system, in the presence of other clients, is proposed. Our contribution is that we explicitly account for the interfering clients, namely the load. Once the load on the system is understood, accurate client-server predictions are possible, with an accuracy of 12.4% load-adjusted normalized mean absolute error. We continue by showing that performance measurement is a challenging sub-problem in this scenario. Using the correct measure of prediction performance is crucial. Performance measurement is misleading, and leads to potential over-confidence in the results, if the effect of the load is ignored. We show that previous predictors have over- (and under-) estimated the quality of their prediction performance by up to 50% in some cases, due to the use of an inappropriate measure. These predictors are not performing as well as stated for about 60% of the service levels predicted. In summary, we achieve predictions which are ≈50% more accurate than previous work, using just ≈2% of the data to achieve this performance gain, a significant reduction in computational complexity.

    Download full text (pdf)
    fulltext
  • 286.
    de Fréin, Ruairí
    et al.
    TSSG, Waterford Institute of Technology, Waterford, Ireland.
    Olariu, Cristian
    University College Dublin.
    Song, Yuqian
    Trinity College Dublin.
    Brennan, Rob
    Trinity College Dublin.
    McDonagh, Patrick
    University College Dublin.
    Hava, Adriana
    University College Dublin.
    Thorpe, Christina
    University College Dublin.
    Murphy, John
    University College Dublin.
    Murphy, Liam
    University College Dublin.
    French, Paul
    IBM .
    Integration of QoS Metrics, Rules and Semantic Uplift for Advanced IPTV Monitoring2014In: Journal of Network and Systems Management, ISSN 1573-7705, Vol. 23, no 3, p. 1-36Article in journal (Refereed)
    Abstract [en]

    Increasing and variable traffic demands due to triple play services pose significant Internet Protocol Television (IPTV) resource management challenges for service providers. Managing subscriber expectations via consolidated IPTV quality reporting will play a crucial role in guaranteeing return-on-investment for players in the increasingly competitive IPTV delivery ecosystem. We propose a fault diagnosis and problem isolation solution that addresses the IPTV monitoring challenge and recommends problem-specific remedial action. IPTV delivery-specific metrics are collected at various points in the delivery topology, the residential gateway and the Digital Subscriber Line Access Multiplexer through to the video Head-End. They are then pre-processed using new metric rules. A semantic uplift engine takes these raw metric logs; it then transforms them into World Wide Web Consortium’s standard Resource Description Framework for knowledge representation and annotates them with expert knowledge from the IPTV domain. This system is then integrated with a monitoring visualization framework that displays monitoring events, alarms, and recommends solutions. A suite of IPTV fault scenarios is presented and used to evaluate the feasibility of the solution. We demonstrate that professional service providers can provide timely reports on the quality of IPTV service delivery using this system.

  • 287.
    de Medeiros, Jose. E. G.
    et al.
    Univ Brasilia, Dept Elect Engn, Brasilia, DF, Brazil..
    Ungureanu, George
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems.
    Sander, Ingo
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems.
    An Algebra for Modeling Continuous Time Systems2018In: PROCEEDINGS OF THE 2018 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE), IEEE, 2018, p. 861-864Conference paper (Refereed)
    Abstract [en]

    Advancements in analog integrated circuit design have led to new possibilities for complex systems that combine continuous- and discrete-time modules on a signal processing chain. However, this also increases the complexity that any design flow must address in order to capture the synergy between the two domains, since the interactions between them need to be better understood. We believe that a common language for describing continuous- and discrete-time computations is beneficial for this goal, and a step towards it is to gain insight into, and describe, more fundamental building blocks. In this work we present an algebra based on the General Purpose Analog Computer (GPAC), a theoretical model of computation recently updated as a continuous-time equivalent of the Turing Machine.
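    For intuition about the GPAC's building blocks: one of its basic units is the integrator, and feeding an integrator's output back into its own input yields the ODE y′ = y, whose solution is eᵗ. The numeric sketch below (our illustration, not the paper's algebra) discretizes that circuit with forward Euler; the step size is an arbitrary choice.

```python
# Minimal numeric sketch of a GPAC circuit: an integrator whose input is
# its own output realizes y' = y, i.e. y(t) = e^t. Forward-Euler
# discretization; step size h is an arbitrary illustrative choice.

def integrate(f, y0, t_end, h=1e-4):
    y, t = y0, 0.0
    while t < t_end:
        y += h * f(y)   # integrator unit: accumulate input * dt
        t += h
    return y

y1 = integrate(lambda y: y, y0=1.0, t_end=1.0)
print(y1)  # close to e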

  • 288.
    de Palma, Noel
    et al.
    INRIA, France.
    Popov, Konstantin
    Swedish Institute of Computer Science (SICS), Kista, Sweden.
    Parlavantzas, Nikos
    INRIA, Grenoble, France.
    Brand, Per
    Swedish Institute of Computer Science (SICS), Kista, Sweden.
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Tools for Architecture Based Autonomic Systems2009In: ICAS: 2009 Fifth International Conference on Autonomic and Autonomous Systems, IEEE Communications Society, 2009, p. 313-320Conference paper (Refereed)
    Abstract [en]

    Recent years have seen a growing interest in autonomic computing, an approach to providing systems with self-managing properties. Autonomic computing aims to address the increasing complexity of the administration of large systems. The contribution of this paper is to provide a generic tool to ease the development of autonomic managers. Using this tool, an administrator provides a set of alternative architectures and specifies conditions that are used by autonomic managers to update architectures at runtime. Software changes are computed as architectural differences in terms of component model artifacts (components, attributes, bindings, etc.). These differences are then used to migrate into the next architecture by reconfiguring only the required part of the running system.
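    The idea of computing software changes as architectural differences can be sketched as a set difference over component descriptions. This is a minimal stand-in, not the paper's component model: architectures here are plain dicts of component name to attributes, and the names are invented.

```python
# Hedged sketch: an architectural diff as the paper's idea suggests,
# listing components to add, remove, or reconfigure. The dict-based
# "architecture" representation and all names are invented for illustration.

def arch_diff(current, target):
    added = {c: target[c] for c in target.keys() - current.keys()}
    removed = set(current) - set(target)
    changed = {c: target[c] for c in current.keys() & target.keys()
               if current[c] != target[c]}
    return added, removed, changed

current = {"frontend": {"replicas": 1}, "cache": {"size": "1G"}}
target  = {"frontend": {"replicas": 3}, "db": {"engine": "pg"}}
added, removed, changed = arch_diff(current, target)
```

    A reconfiguration engine would then act only on `added`, `removed`, and `changed`, leaving the rest of the running system untouched, which is the point made in the abstract.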

  • 289.
    Deb, Abhijit Kumar
    et al.
    KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Hemani, Ahmed
    Öberg, Johnny
    A Heterogeneous Modeling Environment of DSP Systems Using Grammar Based Approach2000In: 3rd Forum on Design Languages (FDL-2000), 2000, p. 365-370Conference paper (Refereed)
  • 290.
    Deb, Abhijit Kumar
    et al.
    KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Hemani, Ahmed
    Öberg, Johnny
    Grammar based Design and Verification of an LPC Speech Codec: A Case Study2000In: Int. Conf. on Signal Processing Applications & Technology (ICSPAT-2000), 2000Conference paper (Refereed)
  • 291. Denil, Joachim
    et al.
    Han, Gang
    Persson, Magnus
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Embedded Control Systems.
    De Meulenaere, Paul
    Zeng, Haibo
    Vangheluwe, Hans
    Model-Driven Engineering Approaches to Design Space Exploration2013Report (Other academic)
    Abstract [en]

    During the design and deployment of increasingly complex distributed embedded systems, engineers are challenged by a plethora of design choices. This often results in infeasible or sub-optimal solutions. In industry and academia, general and domain-specific optimization techniques are developed to explore the tradeoffs within these design spaces, though these techniques are usually not adapted for use within a Model-Driven Engineering (MDE) process. In this paper we propose to encode metaheuristics in transformation models as a general design exploration method. This is complemented by an MDE framework for combining different heterogeneous techniques at different abstraction layers using model transformations. Our approach results in the seamless integration of design space exploration in the MDE process. The proposed methods are applied to the deployment of an automotive embedded system, yielding a set of Pareto-optimal solutions.
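    As a toy illustration of the kind of search such metaheuristics perform (not the paper's transformation-model encoding), here is a deterministic hill climb over a one-dimensional design space with an invented cost model trading latency against area.

```python
# Illustrative design-space exploration: greedy hill climbing over a toy
# integer parameter (a buffer size). The cost model is invented; real DSE
# as in the paper encodes such metaheuristics in model transformations.

def cost(buf_kb):
    # invented trade-off: latency term falls, area term rises with size
    return 1000 / buf_kb + 2 * buf_kb

def hill_climb(start):
    best = start
    while True:
        cand = min((max(1, best - 1), best + 1), key=cost)
        if cost(cand) >= cost(best):
            return best
        best = cand

best = hill_climb(start=1)
```
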

  • 292.
    Dessalk, Yared Dejene
    et al.
    KTH.
    Nikolov, N.
    Matskin, Mihhail
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Soylu, A.
    Roman, D.
    Scalable Execution of Big Data Workflows using Software Containers2020In: Proceedings of the 12th International Conference on Management of Digital EcoSystems, MEDES 2020, Association for Computing Machinery, Inc , 2020, p. 76-83Conference paper (Refereed)
    Abstract [en]

    Big Data processing involves handling large and complex data sets, incorporating different tools and frameworks as well as other processes that help organisations make sense of their data collected from various sources. This set of operations, referred to as Big Data workflows, requires taking advantage of the elasticity of cloud infrastructures for scalability. In this paper, we present the design and prototype implementation of a Big Data workflow approach based on the use of software container technologies and message-oriented middleware (MOM) to enable highly scalable workflow execution. The approach is demonstrated in a use case together with a set of experiments that show the practical applicability of the proposed approach for the scalable execution of Big Data workflows. Furthermore, we present a scalability comparison of our proposed approach with that of Argo Workflows, one of the most prominent tools in the area of Big Data workflows.
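    The MOM-based scaling idea can be sketched in miniature: workflow steps are published to a queue, and any number of independent workers (containers, in the paper's setting) consume them. Here `queue.Queue` and threads stand in for a real message broker and container workers; the "workflow step" is a trivial invented computation.

```python
# Sketch of queue-based workflow scaling: N workers drain a shared task
# queue independently. queue.Queue stands in for a real MOM broker; the
# doubling "step" is an invented placeholder for a workflow task.
import queue
import threading

tasks = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    while True:
        item = tasks.get()
        if item is None:          # poison pill: shut this worker down
            tasks.task_done()
            return
        with lock:
            results.append(item * 2)   # stand-in for a workflow step
        tasks.task_done()

workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()
for i in range(10):
    tasks.put(i)
for _ in workers:
    tasks.put(None)
tasks.join()
for w in workers:
    w.join()
```

    Because workers share nothing but the queue, adding capacity means starting more of them, which is the elasticity property the paper exploits.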

  • 293.
    Di Bona, Gabriele
    et al.
    Queen Mary Univ London, Sch Math Sci, London E1 4NS, England..
    Di Gaetano, Leonardo
    Cent European Univ, Dept Network & Data Sci, A-1100 Vienna, Austria..
    Latora, Vito
    Queen Mary Univ London, Sch Math Sci, London E1 4NS, England.;Univ Catania, Dipartimento Fis Astron, I-95123 Catania, Italy.;INFN, I-95123 Catania, Italy.;Complex Sci Hub Vienna CSHV, Josefstadter Str 39, A-1080 Vienna, Austria..
    Coghi, Francesco
    Nordita SU; Stockholm Univ, Hannes Alfvens vag 12, SE-10691 Stockholm, Sweden..
    Maximal dispersion of adaptive random walks2022In: Physical Review Research, E-ISSN 2643-1564, Vol. 4, no 4, article id L042051Article in journal (Refereed)
    Abstract [en]

    Maximum entropy random walks (MERWs) are maximally dispersing and play a key role in optimizing information spreading in various contexts. However, building MERWs comes at the cost of knowing beforehand the global structure of the network, a requirement that makes them impractical in real-world scenarios. Here, we propose an adaptive random walk (ARW), which instead maximizes dispersion by updating its transition rule on the local information collected while exploring the network. We show how to derive ARW via a large-deviation representation of MERW and study its dynamics on synthetic and real-world networks.
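    For intuition, the MERW baseline that the ARW approximates can be built explicitly when the full adjacency matrix is known: the transition probabilities are P(i→j) = A(i,j) ψ(j) / (λ ψ(i)), with (λ, ψ) the dominant eigenpair of A. The sketch below (our illustration, not the paper's code) computes this with naive power iteration on a small graph.

```python
# Sketch of the global-knowledge MERW construction: transition matrix
# P[i][j] = A[i][j] * psi[j] / (lam * psi[i]), with (lam, psi) the
# dominant eigenpair of the adjacency matrix, found by power iteration.

def merw_transitions(A, iters=500):
    n = len(A)
    psi = [1.0] * n
    lam = 1.0
    for _ in range(iters):                 # naive power iteration
        new = [sum(A[i][j] * psi[j] for j in range(n)) for i in range(n)]
        lam = max(new)
        psi = [x / lam for x in new]
    return [[A[i][j] * psi[j] / (lam * psi[i]) for j in range(n)]
            for i in range(n)]

A = [[0, 1, 0, 1],   # 4-cycle graph
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
P = merw_transitions(A)
```

    The dependence on the dominant eigenpair is exactly the global knowledge the abstract objects to; the ARW replaces it with locally collected information.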

  • 294. Di Fatta, G.
    et al.
    Liotta, A.
    Agoulmine, N.
    Agrawal, G.
    Berthold, M. R.
    Bordini, R. H.
    Borgelt, C.
    Boutaba, R.
    Calzarossa, M. C.
    Cannataro, M.
    Choudhary, A.
    Cortes, U.
    Dagiuklas, T.
    De Turck, F.
    De Vleeschauwer, B.
    Demestichas, P.
    Dhoedt, B.
    Festor, O.
    Fortino, G.
    Friderikos, V.
    Giunchiglia, F.
    Gravier, C.
    Guo, Y.
    Hunter, D.
    Karypis, G.
    Krishnaswamy, S.
    Limam, N.
    Medhi, D.
    Merani, M. L.
    Nürnberger, A.
    Pardede, E.
    Parthasarathy, S.
    Gaspary, L. P.
    Ranc, D.
    Sivakumar, K.
    Stadler, Rolf
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
    Stiller, B.
    Strassner, J.
    Syed, A.
    Talia, D.
    Urso, M. A.
    Van Der Meer, S.
    Wolff, R.
    Granville, L. Z.
    Preface2011In: IEEE International Conference on Data Mining. Proceedings, ISSN 1550-4786, p. xlviii-xlix, article id 6137551Article in journal (Refereed)
  • 295. Dima, E.
    et al.
    Brunnstrom, K.
    Sjostrom, M.
    Andersson, M.
    Edlund, J.
    Johanson, M.
    Qureshi, T.
    View position impact on QoE in an immersive telepresence system for remote operation2019In: 2019 11th International Conference on Quality of Multimedia Experience, QoMEX 2019, Institute of Electrical and Electronics Engineers Inc. , 2019Conference paper (Refereed)
    Abstract [en]

    In this paper, we investigate how different viewing positions affect a user's Quality of Experience (QoE) and performance in an immersive telepresence system. A QoE experiment has been conducted with 27 participants to assess the general subjective experience and the performance of remotely operating a toy excavator. Two view positions have been tested, an overhead and a ground-level view, respectively, which encourage reliance on stereoscopic depth cues to different extents for accurate operation. Results demonstrate a significant difference between ground and overhead views: the ground view increased the perceived difficulty of the task, whereas the overhead view increased the perceived accomplishment as well as the objective performance of the task. The perceived helpfulness of the overhead view was also significant according to the participants. 

  • 296.
    Din, Eman
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Exploring the Use of Blockchain Technology to Address Cybersecurity Risks in Banking and Finance2023Independent thesis Basic level (university diploma), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    The growing reliance on digital services has led to an escalation in cyber risks and attacks targeting banks and financial institutions. Such cyber threats necessitate innovative solutions, but achieving them requires overcoming the challenge of finding reliable information on utilizing blockchain technology to combat cyber-attacks. Hence, this work uses a systematic literature review to study the potential of blockchain technology to mitigate cyber threats in the banking and financial sector, as well as the advantages and challenges of blockchain technology. Blockchain provides heightened security, improves transaction processes, ensures transparency and privacy, optimizes operational performance, and reduces fraud. Despite its advantages, challenges like regulatory compliance, privacy concerns, and scalability need careful consideration. Prominent institutions like Bank of America, HSBC, and J.P. Morgan are actively investigating blockchain integration. To ensure wise choices regarding blockchain use in organizations with valuable data and assets, advisors and business leaders must fully grasp both the advantages and possible drawbacks of this technology. Armed with this information, financial institutions gain essential insights, steering them towards judicious choices concerning the integration and utilization of blockchain technology, thereby enhancing and reinforcing their cybersecurity framework.

  • 297.
    Domingues, Rémi
    KTH, School of Computer Science and Communication (CSC).
    Machine Learning for Unsupervised Fraud Detection2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Fraud is a threat that most online service providers must address in the development of their systems to ensure an efficient security policy and the integrity of their revenue. Amadeus, a Global Distribution System providing a transaction platform for flight booking by travel agents, is targeted by fraud attempts that could lead to revenue losses and indemnifications.

    The objective of this thesis is to detect fraud attempts by applying machine learning algorithms to bookings represented by Passenger Name Record history. Due to the lack of labelled data, the current study presents a benchmark of unsupervised algorithms and aggregation methods. It also describes anomaly detection techniques which can be applied to self-organizing maps and hierarchical clustering. Considering the important amount of transactions per second processed by Amadeus back-ends, we eventually highlight potential bottlenecks and alternatives.
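    To give a flavor of the unsupervised scoring family such a benchmark compares (this is a generic illustration, not the thesis's actual algorithms), one simple scheme scores each booking, reduced to numeric features, by its standardized distance from the feature-wise mean; the highest scores are candidate anomalies. The features below are invented.

```python
# Generic unsupervised anomaly scoring sketch: standardized distance from
# the feature-wise mean. The booking "features" are invented; the thesis
# benchmarks more sophisticated methods (e.g., SOMs, hierarchical clustering).
import math

def anomaly_scores(rows):
    n, d = len(rows), len(rows[0])
    mean = [sum(r[k] for r in rows) / n for k in range(d)]
    std = [math.sqrt(sum((r[k] - mean[k]) ** 2 for r in rows) / n) or 1.0
           for k in range(d)]
    return [math.sqrt(sum(((r[k] - mean[k]) / std[k]) ** 2
                          for k in range(d)))
            for r in rows]

bookings = [[2, 1], [3, 1], [2, 2], [3, 2], [40, 9]]  # last row: outlier
scores = anomaly_scores(bookings)
```

    No labels are needed, which matches the thesis's setting; the cost is that "anomalous" only means "far from the bulk of the data", so domain review of flagged bookings remains necessary.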

  • 298.
    Domke, Jens
    et al.
    RIKEN Center for Computational Science, 7-1-26 Minatojima-minamimachi, Chuo-ku, Kobe, Hyogo, Japan, 650-0047, 7-1-26 Minatojima-minamimachi, Chuo-ku, Hyogo.
    Vatai, Emil
    RIKEN Center for Computational Science, 7-1-26 Minatojima-minamimachi, Chuo-ku, Kobe, Hyogo, Japan, 650-0047, 7-1-26 Minatojima-minamimachi, Chuo-ku, Hyogo.
    Gerofi, Balazs
    Intel Corporation, 2111 NE 25th Ave, Hillsboro, Oregon, United States, 97124, 2111 NE 25th Ave.
    Kodama, Yuetsu
    RIKEN Center for Computational Science, 7-1-26 Minatojima-minamimachi, Chuo-ku, Kobe, Hyogo, Japan, 650-0047, 7-1-26 Minatojima-minamimachi, Chuo-ku, Hyogo.
    Wahib, Mohamed
    RIKEN Center for Computational Science, 7-1-26 Minatojima-minamimachi, Chuo-ku, Kobe, Hyogo, Japan, 650-0047, 7-1-26 Minatojima-minamimachi, Chuo-ku, Hyogo.
    Podobas, Artur
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST). KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. KTH Royal Institute of Technology, Brinellvägen 8, Stockholm, Stockholm, Sweden, 114 28, Brinellvägen 8, Stockholm.
    Mittal, Sparsh
    Indian Institute of Technology, Roorkee - Haridwar Highway, Roorkee, Uttarakhand, India, Roorkee - Haridwar Highway, Uttarakhand.
    Pericàs, Miquel
    Chalmers University of Technology, Chalmersplatsen 4, Göteborg, Västra Götaland, Sweden, 412 96, Chalmersplatsen 4, Västra Götaland.
    Zhang, Lingqi
    Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo, Tokyo, 2-12-1 Ookayama, Meguro-ku, Tokyo.
    Chen, Peng
    National Institute of Advanced Industrial Science and Technology, 1-8-31 Midorigaoka, Ikeda-ku, Osaka, Osaka, Japan, 563-0026, 1-8-31 Midorigaoka, Ikeda-ku, Osaka.
    Drozd, Aleksandr
    RIKEN Center for Computational Science, 7-1-26 Minatojima-minamimachi, Chuo-ku, Kobe, Hyogo, Japan, 650-0047, 7-1-26 Minatojima-minamimachi, Chuo-ku, Hyogo.
    Matsuoka, Satoshi
    RIKEN Center for Computational Science, 7-1-26 Minatojima-minamimachi, Chuo-ku, Kobe, Hyogo, Japan, 650-0047, 7-1-26 Minatojima-minamimachi, Chuo-ku, Hyogo.
    At the Locus of Performance: Quantifying the Effects of Copious 3D-Stacked Cache on HPC Workloads2023In: ACM Transactions on Architecture and Code Optimization (TACO), ISSN 1544-3566, E-ISSN 1544-3973, Vol. 20, no 4, article id 57Article in journal (Refereed)
    Abstract [en]

    Over the last three decades, innovations in the memory subsystem were primarily targeted at overcoming the data movement bottleneck. In this paper, we focus on a specific market trend in memory technology: 3D-stacked memory and caches. We investigate the impact of extending the on-chip memory capabilities of future HPC-focused processors, particularly via 3D-stacked SRAM. First, we propose a method oblivious to the memory subsystem to gauge the upper bound on performance improvements when data movement costs are eliminated. Then, using the gem5 simulator, we model two variants of a hypothetical LARge Cache processor (LARC), fabricated in 1.5 nm and enriched with high-capacity 3D-stacked cache. Through an extensive set of experiments involving a broad range of proxy-applications and benchmarks, we aim to reveal how HPC CPU performance will evolve, and observe an average boost of 9.56× for cache-sensitive HPC applications on a per-chip basis. Additionally, we exhaustively document our methodological exploration to motivate HPC centers to drive their own technological agenda through enhanced co-design.

  • 299.
    Drakenberg, N. Peter
    et al.
    KTH, Superseded Departments (pre-2005), Teleinformatics.
    Lundevall, Fredrik
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS. KTH, Superseded Departments (pre-2005), Teleinformatics.
    Lisper, Björn
    Mälardalen University.
    An Efficient Semi-Hierarchical Array Layout2001In: Interaction between Compilers and Computer Architectures / [ed] Gyungho Lee, Pen-Chung Yew, Kluwer Academic Publishers, 2001, p. 21-43Conference paper (Refereed)
    Abstract [en]

    For high-level programming languages, linear array layouts (e.g., column-major and row-major order) have de facto been the sole form of mapping array elements to memory. The increasingly deep and complex memory hierarchies present in current computer systems expose several deficiencies of linear array layouts. One such deficiency is that linear array layouts strongly favor locality in one index dimension of multidimensional arrays. Secondly, the exact mapping of array elements to cache locations depends on the array’s size, which effectively renders linear array layouts non-analyzable with respect to cache behavior. We present and evaluate an alternative, semi-hierarchical, array layout which differs from linear array layouts by being neutral with respect to locality in different index dimensions and by enabling accurate and precise analysis of cache behavior at compile-time. Simulation results indicate that the proposed layout may exhibit vastly improved TLB behavior, leading to clearly measurable improvements in execution time, despite a lack of suitable hardware support for address computations. Cache behavior is formalized in terms of conflict vectors, and it is shown how to compute such conflict vectors at compile-time.
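    A well-known layout in the same spirit, shown purely for intuition (the paper defines its own semi-hierarchical layout, not necessarily this one), is Morton (Z-) order: interleaving the bits of the row and column indices gives locality that is symmetric in both dimensions and an element-to-address mapping independent of the array's extent.

```python
# Illustrative Morton (Z-order) index: interleave the low `bits` bits of
# x (even bit positions) and y (odd bit positions). Locality is neutral
# across the two index dimensions, unlike row- or column-major order.

def morton(x, y, bits=16):
    z = 0
    for b in range(bits):
        z |= ((x >> b) & 1) << (2 * b)
        z |= ((y >> b) & 1) << (2 * b + 1)
    return z
```

    Because the mapping depends only on the index bits, not the array size, cache behavior under such layouts is amenable to the compile-time analysis the abstract emphasizes.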

  • 300. Du, M.
    et al.
    Hammerschmidt, C.
    Varisteas, G.
    State, R.
    Brorsson, Mats
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Zhang, Z.
    Time series modeling of market price in real-time bidding2019In: ESANN 2019 - Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN , 2019, p. 643-648Conference paper (Refereed)
    Abstract [en]

    Real-Time-Bidding (RTB) is one of the most popular online advertisement selling mechanisms. Modeling the highly dynamic bidding environment is crucial for making good bids. Market prices of auctions fluctuate heavily within short time spans. State-of-the-art methods neglect the temporal dependencies of bidders’ behaviors. In this paper, the bid requests are aggregated by time and the mean market price per aggregated segment is modeled as a time series. We show that the Long Short Term Memory (LSTM) neural network outperforms the state-of-the-art univariate time series models by capturing the nonlinear temporal dependencies in the market price. We further improve the predicting performance by adding a summary of exogenous features from bid requests.
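    The aggregation step described above, turning raw bid events into a per-window mean-price series that a time-series model (an LSTM, in the paper) can consume, can be sketched as follows. The window width and event data are invented for illustration.

```python
# Sketch of the paper's preprocessing: bucket (timestamp, price) bid
# events into fixed-width windows and take the mean market price per
# window. The 60 s window and the sample events are invented.

def mean_price_series(events, window=60):
    """events: (timestamp_seconds, price) pairs -> per-window means."""
    buckets = {}
    for ts, price in events:
        buckets.setdefault(ts // window, []).append(price)
    return [sum(v) / len(v) for _, v in sorted(buckets.items())]

events = [(0, 1.0), (30, 3.0), (60, 5.0), (90, 7.0), (125, 4.0)]
series = mean_price_series(events)
```

    The resulting series is what a univariate baseline or an LSTM would be fit to; the paper's point is that the LSTM captures nonlinear temporal dependencies across such windows that the baselines miss.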
