kth.se Publications
1 - 50 of 60
  • 1. Aydt, H.
    et al.
    Turner, S. J.
    Cai, W.
    Low, M. Y. H.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT).
    Symbiotic simulation model validation for radiation detection applications (2009). In: PADS '09: ACM/IEEE/SCS 23rd Workshop on Principles of Advanced and Distributed Simulation, IEEE Computer Society, 2009, p. 11-18. Conference paper (Refereed)
    Abstract [en]

    Detection of radiological dispersal devices (RDDs) is important because of their potential for destruction and their psychological impact on the affected population. These devices leave a clear trace that can be followed with appropriate detection equipment. Geiger counters provide data on radiation intensity, but this alone is neither enough to pinpoint a radiation source nor directly usable to classify it. We describe a method using symbiotic simulation which can be used to classify and localise a radiation source, given accurate measurements of radiation intensities at reference points and a detailed model of the environment. Initial classification and localisation, as well as continuous tracking of a moving radiation source, are considered. The effects of a measurement error and a model error are investigated.
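As a rough illustration of the localisation step described above, the sketch below assumes an inverse-square intensity model and recovers a source by grid search over candidate positions; the model, sensor layout and estimator are hypothetical, not the paper's method:

```python
# Hypothetical localisation sketch: assume intensity I = s / d^2 and
# grid-search the source position whose least-squares-fitted strength
# best reproduces the intensities measured at known reference points.
import itertools

SENSORS = [(0, 0), (5, 0), (0, 5), (5, 5)]  # reference measurement points

def predicted(pos, s, sensor):
    d2 = (pos[0] - sensor[0]) ** 2 + (pos[1] - sensor[1]) ** 2
    return s / d2  # inverse-square intensity model (assumed)

def localise(measured, sensors=SENSORS, grid=range(6)):
    """Return (position, strength) minimising squared intensity error."""
    best = None
    for pos in itertools.product(grid, grid):
        if pos in sensors:
            continue  # zero distance: skip candidates sitting on a sensor
        d2 = [(pos[0] - r[0]) ** 2 + (pos[1] - r[1]) ** 2 for r in sensors]
        # closed-form least-squares source strength for a fixed position
        s = sum(m / di for m, di in zip(measured, d2)) / sum(1 / di ** 2 for di in d2)
        resid = sum((m - s / di) ** 2 for m, di in zip(measured, d2))
        if best is None or resid < best[0]:
            best = (resid, pos, s)
    return best[1], best[2]

# Synthetic measurements from a source of strength 5.0 at (2, 3):
measured = [predicted((2, 3), 5.0, r) for r in SENSORS]
pos, s = localise(measured)  # recovers position (2, 3), strength ~5.0
```

With noise-free synthetic data the grid search recovers the generating position exactly; with measurement error, the residual surface flattens, which is the effect the abstract says is investigated.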

  • 2. Aydt, H.
    et al.
    Turner, S. J.
    Cai, W.
    Yoke Hean Low, M.
    Lendermann, P.
    Gan, B. P.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Preventive what-if analysis in symbiotic simulation (2008). In: Proc. Winter Simul. Conf., 2008, p. 750-758. Conference paper (Refereed)
    Abstract [en]

    The what-if analysis process is essential in symbiotic simulation systems. It is responsible for creating a number of alternative what-if scenarios and evaluating their performance by means of simulation. Most applications use a reactive approach for triggering the what-if analysis process. In this paper we describe a preventive triggering approach which is based on the detection of a future critical condition in the forecast of a physical system. With decreasing probability of a critical condition, preventive what-if analysis becomes less desirable. We introduce the notion of a G-value and explain how this metric can be used to decide whether or not to use preventive what-if analysis. In addition, we give an example of a possible application in semiconductor manufacturing.
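The triggering decision can be sketched as an expected-net-benefit test; `should_trigger` and its G-value form are assumptions for illustration and do not reproduce the paper's actual metric:

```python
# Illustrative sketch only: treat the "G-value" as expected benefit of
# prevention minus the cost of running the what-if analysis. As the
# probability of a critical condition falls, triggering stops paying off.
def should_trigger(p_critical, benefit_if_prevented, analysis_cost):
    """Trigger preventive what-if analysis only when it pays off in
    expectation (assumed decision rule, not the paper's definition)."""
    g_value = p_critical * benefit_if_prevented - analysis_cost
    return g_value > 0.0

# A likely critical condition justifies the analysis; an unlikely one
# does not, because the expected benefit falls below the analysis cost.
run_likely = should_trigger(0.8, 100.0, 10.0)    # True
run_unlikely = should_trigger(0.05, 100.0, 10.0)  # False
```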

  • 3. Aydt, Heiko
    et al.
    Turner, Stephen J.
    Cai, Wentong
    Low, Malcolm Yoke Hean
    Ong, Yew-Soon
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Toward an Evolutionary Computing Modeling Language (2011). In: IEEE Transactions on Evolutionary Computation, ISSN 1089-778X, E-ISSN 1941-0026, Vol. 15, no. 2, p. 230-247. Article in journal (Refereed)
    Abstract [en]

    The importance of domain knowledge in the design of effective evolutionary algorithms (EAs) is widely acknowledged in the meta-heuristics community. In the last few decades, a plethora of EAs has been manually designed by domain experts for solving domain-specific problems. Specialization has been achieved mainly by embedding available domain knowledge into the algorithms. Although programming libraries have been made available to construct EAs, a unifying framework for designing specialized EAs across different problem domains and branches of evolutionary computing does not exist yet. In this paper, we address this issue by introducing an evolutionary computing modeling language (ECML) which is based on the unified modeling language (UML). ECML incorporates basic UML elements and introduces new extensions that are specially needed for the evolutionary computation domain. Subsequently, the concept of meta evolutionary algorithms (MEAs) is introduced as a family of EAs that is capable of interpreting ECML. MEAs are solvers that are not restricted to a particular problem domain or branch of evolutionary computing through the use of ECML. By separating problem-specific domain knowledge from the EA implementation, we show that a unified framework for evolutionary computation can be attained. We demonstrate our approach by applying it to a number of examples.

  • 4. Baldoni, R.
    et al.
    Di Ciccio, C.
    Mecella, M.
    Patrizi, F.
    Querzoni, L.
    Santucci, G.
    Dustdar, S.
    Li, F.
    Truong, H.-L.
    Albornos, L.
    Milagro, F.
    Rafael, P. A.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT).
    Rasch, Katharina
    KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Lozano, M. G.
    Aiello, M.
    Lazovik, A.
    Denaro, A.
    Lasala, G.
    Pucci, P.
    Holzner, C.
    Cincotti, F.
    Aloise, F.
    An embedded middleware platform for pervasive and immersive environments for-all (2009). In: 2009 6th IEEE Annual Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks Workshops (SECON Workshops 2009), IEEE, 2009, p. 161-163. Conference paper (Refereed)
    Abstract [en]

    Embedded systems are specialized computers used in larger systems or machines to control equipment such as automobiles, home appliances, and communication, control and office machines. Such pervasiveness is particularly evident in immersive realities, i.e., scenarios in which invisible embedded systems need to continuously interact with human users, in order to provide continuously sensed information and to react to service requests from the users themselves. The SM4All project investigates an innovative middleware platform for inter-working of smart embedded services in immersive and person-centric environments, through the use of composability and semantic techniques for dynamic service reconfiguration. This is applied to the challenging scenario of private houses and home-care assistance in the presence of users with different abilities and needs (e.g., young, able-bodied, aged and disabled). This paper presents a brief overview of the SM4All system architecture.

  • 5.
    Eklöf, Martin
    et al.
    Swedish Defence Research Agency (FOI).
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    Moradi, Farshad
    Swedish Defence Research Agency (FOI).
    Evaluation of a Fault-Tolerance Mechanism for HLA-Based Distributed Simulations (2006). In: 20th Workshop on Principles of Advanced and Distributed Simulation (PADS 2006), Singapore, 24-26 May 2006, p. 175-182. Conference paper (Refereed)
    Abstract [en]

    Successful integration of Modeling and Simulation (M&S) in the future Network-Based Defence (NBD) depends, among other things, on providing fault-tolerant (FT) distributed simulations. This paper describes a framework, named Distributed Resource Management System (DRMS), for robust execution of simulations based on the High Level Architecture. More specifically, a mechanism for FT in simulations synchronized according to the time-warp protocol is presented and evaluated. The results show that utilization of the FT mechanism, in a worst-case scenario, increases the total number of generated messages by 68% if one fault occurs. When the FT mechanism is not utilized, the same scenario shows an increase in total number of generated messages by 90%. Considering the worst-case scenario a plausible requirement on an M&S infrastructure of the NBD, the overhead caused by the FT mechanism is considered acceptable.

  • 6.
    Eklöf, Martin
    et al.
    Swedish Defence Research Agency (FOI), Dept. of Systems Modeling.
    Moradi, Farshad
    Swedish Defence Research Agency (FOI), Dept. of Systems Modeling.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    A framework for fault-tolerance in HLA-based distributed simulations (2005). In: Proceedings of the 2005 Winter Simulation Conference, 2005, p. 1182-1189. Conference paper (Refereed)
    Abstract [en]

    The widespread use of simulation in future military systems depends, among other things, on the degree of reuse and availability of simulation models. Simulation support in such systems must also cope with failures in software or hardware. Research in fault-tolerant distributed simulation, especially in the context of the High Level Architecture (HLA), has been quite sparse; nor does the HLA standard itself cover fault-tolerance extensively. This paper describes a framework, named Distributed Resource Management System (DRMS), for robust execution of federations. The implementation of the framework is based on Web Services and Semantic Web technology, and provides fundamental services and a consistent mechanism for describing the resources managed by the environment. To evaluate the proposed framework, a federation has been developed that utilizes the time-warp mechanism for synchronization. In this paper, we describe our approach to fault tolerance and give an example to illustrate how DRMS behaves when it faces faulty federates.

  • 7. Eklöf, Martin
    et al.
    Sparf, Magnus
    Moradi, Farshad
    Ayani, Rassul
    KTH, Superseded Departments (pre-2005), Microelectronics and Information Technology, IMIT.
    Peer-to-peer-based resource management in support of HLA-based distributed simulations (2004). In: Simulation (San Diego, Calif.), ISSN 0037-5497, E-ISSN 1741-3133, Vol. 80, no. 4-5, p. 181-190. Article in journal (Refereed)
    Abstract [en]

    In recent years, the concept of peer-to-peer computing has gained renewed interest for sharing resources within and between organizations or individuals. This article describes a decentralized resource management system (DRMS) that uses a network of workstations for the execution and storage of high-level architecture (HLA) federations/federates in a peer-to-peer environment. The implementation of DRMS is based on the open-source project JXTA, which represents an attempt to standardize the peer-to-peer domain. DRMS is part of a Web-based simulation environment supporting collaborative design, development, and execution of HLA federations. This study evaluates the possibilities of using peer-to-peer technology for increasing the reuse and availability of simulation components within the defense modelling and simulation community. More specifically, it addresses the necessary adjustments of simulation components to conform to the requirements of the DRMS and shows that JXTA could provide the foundation for a distributed system that increases the possibilities for reusing simulation components.

  • 8.
    García Lozano, Marianela
    et al.
    FOI, Swedish Defence Research Agency, Department of Systems Modelling.
    Moradi, Farshad
    FOI, Swedish Defence Research Agency, Department of Systems Modelling.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    SDR: a semantic based Distributed Repository for Simulation Models and Resources (2007). In: AMS 2007: First Asia International Conference on Modelling & Simulation (Asia Modelling Symposium), 2007, p. 171-176. Conference paper (Refereed)
    Abstract [en]

    Recent advances in Internet, Peer-to-Peer and Grid technologies have made collaboration and resource sharing across organizational boundaries more feasible. Today, it is essential for many organizations to be able to discover, share and manage distributed resources in a transparent, meaningful and secure way. A fundamental problem is locating, matching and composing resources or services of interest. In this paper we describe our initial work on designing and developing a semantic based distributed repository for secure sharing of simulation models, components and related resources such as computer resources. We propose an overlay architecture which combines advances in Semantic Web, Peer-to-Peer and Grid techniques. In our project at the Swedish Defence Research Agency (FOI) we needed a repository of simulation-related resources and, having identified our requirements, found that no suitable off-the-shelf system was available. We describe the design, tools and a prototype implementation of this system, the Semantic based Distributed Repository (SDR), and conclude with our experiences and some open issues. We argue that although some of the techniques used are still somewhat immature and need further improvement, a system like the SDR has considerable potential and can also be used in domains other than modeling and simulation.

  • 9.
    Kamrani, Farzad
    et al.
    KTH, School of Information and Communication Technology (ICT), Electronic, Computer and Software Systems, ECS.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    Simulation-aided path planning of UAV (2007). In: Proceedings of the 2007 Winter Simulation Conference, Vols. 1-5, 2007, p. 1285-1293. Conference paper (Refereed)
    Abstract [en]

    The problem of path planning for Unmanned Aerial Vehicles (UAVs) with a tracking mission, when some a priori information about the targets and the environment is available, can in some cases be addressed using simulation. Sequential Monte Carlo simulation can be used to assess the state of the system and target when the UAV reaches the area of responsibility and during the tracking task. This assessment of the future is then used to compare the impact of choosing different alternative paths on the expected value of the detection time. A path with a lower expected detection time is preferred. In this paper the details of this method are described. Simulations are performed by a special-purpose simulation tool to show the feasibility of this method and to compare it with an exhaustive search.
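The Sequential Monte Carlo assessment step can be sketched with a toy 1-D particle filter; the random-walk motion model and Gaussian sensor likelihood are illustrative assumptions, not the paper's models:

```python
# Toy 1-D sequential Monte Carlo sketch of target-state assessment:
# propagate particles through an assumed random-walk motion model,
# re-weight them by an assumed Gaussian measurement likelihood, and
# resample to obtain a fresh, equally weighted particle set.
import math
import random

def smc_step(particles, z, motion_sigma=0.3, sensor_sigma=0.5, rng=random):
    # 1. Propagate: random-walk motion model (assumption).
    particles = [p + rng.gauss(0.0, motion_sigma) for p in particles]
    # 2. Weight: Gaussian likelihood of measurement z (assumption).
    w = [math.exp(-((p - z) ** 2) / (2 * sensor_sigma ** 2)) for p in particles]
    total = sum(w)
    w = [wi / total for wi in w]
    # 3. Resample proportionally to the weights.
    return rng.choices(particles, weights=w, k=len(particles))

rng = random.Random(42)
particles = [rng.uniform(0.0, 10.0) for _ in range(500)]
for _ in range(20):                        # measurements near x = 3.0
    particles = smc_step(particles, 3.0, rng=rng)
estimate = sum(particles) / len(particles)  # posterior mean near 3.0
```

The resulting particle cloud is the "assessment of the future" that alternative candidate paths can then be scored against.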

  • 10.
    Kamrani, Farzad
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    UAV Path Planning in Search Operations (2009). In: Aerial Vehicles / [ed] Thanh Mung Lam, InTech, 2009, p. 331-344. Chapter in book (Other academic)
  • 11.
    Kamrani, Farzad
    et al.
    KTH, School of Information and Communication Technology (ICT), Electronic, Computer and Software Systems, ECS.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    Using on-line simulation for adaptive path planning of UAVs (2007). In: DS-RT 2007: 11th IEEE International Symposium on Distributed Simulation and Real-Time Applications, Proceedings / [ed] Roberts, D. J.; Theodoropoulos, G. K.; ElSaddik, A., Los Alamitos: IEEE Computer Society, 2007, p. 167-174. Conference paper (Refereed)
    Abstract [en]

    In a surveillance mission, the task of Unmanned Aerial Vehicle (UAV) path planning can in some cases be addressed using Sequential Monte Carlo (SMC) simulation. If sufficient a priori information about the target and the environment is available, an assessment of the future state of the target is obtained by the SMC simulation. This assessment is used in a set of "what-if" simulations to compare different alternative UAV paths. In a static environment this simulation can be conducted prior to the mission. However, if the environment is dynamic, the "what-if" simulations must be run on-line, i.e., in real time. In this paper the details of this on-line simulation approach to UAV path planning are studied and its performance is compared with two other methods: an off-line simulation-aided path planning method and an exhaustive search method. The conducted simulations indicate that the on-line simulation generally performs better than the two other methods.

  • 12.
    Kamrani, Farzad
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Karimson, Anvar
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Optimizing a Business Process Model by Using Simulation (2010). In: 2010 IEEE Workshop on Principles of Advanced and Distributed Simulation, IEEE Press, 2010, p. 1-8. Conference paper (Refereed)
    Abstract [en]

    In this paper we present the problem of optimizing a business process model with the objective of finding the most beneficial assignment of tasks to agents, without modifying the structure of the process itself. Four types of processes are distinguished, and algorithms for finding optimal solutions to the task assignment problem for each are presented: 1) a business process with a predetermined workflow, for which the optimal solution is conveniently found using the well-known Hungarian algorithm; 2) a Markovian process, for which we present an analytical method that reduces it to the first type; 3) a non-Markovian process, for which we employ a simulation method to obtain the optimal solution; 4) the most general case, i.e. a non-Markovian process containing critical tasks. In such processes, depending on the agents that perform critical tasks, the workflow of the process may change. We introduce two algorithms for this type of process. The first finds the optimal solution, but is feasible only when the number of critical tasks is small. The second is applicable even to a large number of critical tasks, but provides a near-optimal solution. In the second algorithm a hill-climbing heuristic is combined with the Hungarian algorithm and simulation to find an overall near-optimal assignment of tasks to agents. The results of a series of tests that demonstrate the feasibility of the algorithms are included.
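For the first process type above (predetermined workflow), task assignment reduces to the classical assignment problem. A tiny brute-force sketch with a made-up 3x3 benefit matrix shows the optimum that the Hungarian algorithm would find in polynomial time; the matrix values and the maximization convention are illustrative assumptions:

```python
# Brute-force illustration of the type-1 (predetermined workflow) case:
# for a small benefit matrix, enumerating all agent->task permutations
# yields the same optimum the Hungarian algorithm finds in O(n^3).
from itertools import permutations

benefit = [
    [4.0, 1.0, 3.0],  # hypothetical benefit of agent 0 on tasks 0..2
    [2.0, 0.0, 5.0],
    [3.0, 2.0, 2.0],
]

def best_assignment(benefit):
    """Return (max total benefit, task tuple): agent i gets task p[i]."""
    n = len(benefit)
    best = max(permutations(range(n)),
               key=lambda p: sum(benefit[i][p[i]] for i in range(n)))
    return sum(benefit[i][best[i]] for i in range(n)), best

total, tasks = best_assignment(benefit)  # total 11.0 via tasks (0, 2, 1)
```

Brute force is O(n!) and only viable for tiny instances, which is precisely why the Hungarian algorithm matters for the real problem.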

  • 13.
    Kamrani, Farzad
    et al.
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Moradi, Farshad
    Swedish Defence Research Agency (FOI).
    A framework for simulation-based optimization of business process models (2012). In: Simulation (San Diego, Calif.), ISSN 0037-5497, E-ISSN 1741-3133, Vol. 88, no. 7, p. 852-869. Article in journal (Refereed)
    Abstract [en]

    The Assignment Problem is a classical problem in the field of combinatorial optimization, having a wide range of applications in a variety of contexts. In general terms, the Assignment Problem consists of determining the best assignment of tasks to agents according to a predefined objective function. Different variants of the Assignment Problem have been extensively investigated in the literature over the last 50 years. In this work, we introduce and analyze the problem of optimizing a business process model with the objective of finding the most beneficial assignment of tasks to agents. Despite similarities, this problem is distinguished from the traditional Assignment Problem in that we consider tasks to be part of a business process model, interconnected according to defined rules and constraints. In other words, assigning a business process to agents is a more complex form of the Assignment Problem. Two main categories of business processes, assignment-independent and assignment-dependent, are distinguished. In the first category, different assignments of tasks to agents do not affect the flow of the business process, while processes in the second category contain critical tasks that may change the workflow, depending on who performs them. In each category several types of processes are studied, and algorithms for finding optimal and near-optimal solutions are presented. For the first category, depending on the type of process, the Hungarian algorithm is combined with either the analytical method or simulation to provide an optimal solution. For the second category, we introduce two algorithms. The first one finds an optimal solution, but is feasible only when the number of critical tasks is small. The second algorithm is applicable to a large number of critical tasks, but provides a near-optimal solution. In the second algorithm a hill-climbing heuristic method is combined with the Hungarian algorithm and simulation to find an overall near-optimal solution. A series of tests is conducted which demonstrates that the proposed algorithms efficiently find optimal solutions for assignment-independent processes and near-optimal solutions for assignment-dependent processes.

  • 14.
    Kamrani, Farzad
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Moradi, Farshad
    Swedish Defence Research Agency (FOI).
    A Model for Estimating the Performance of a Team of Agents (2011). In: Proceedings of the 2011 IEEE International Conference on Systems, Man and Cybernetics (SMC 2011), IEEE Press, 2011, p. 2393-2400. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a model for estimating the performance of a team of agents, based on the capabilities of the agents and the importance of these capabilities for the task. The performance of a team is assumed to be the sum of the contributions of individual agents and the contributions of subgroups formed within the team. We introduce a set of notations required for discussing the suggested models. We also propose a model to estimate the benefit an agent gains from interaction with other agents in a subgroup. Based on this benefit model and different (common) strategies, the agents devise plans in which they formulate to what extent they are willing to cooperate with other agents. A negotiation algorithm that resolves the conflicts between the desires of the agents is presented. The effects of this algorithm and of the different strategies are tested on a set of generated data. The test results show that the performance of a team whose agents choose a cooperation strategy that follows the principle of least effort (Zipf's law) is higher than that of teams with other cooperation strategies.
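The least-effort cooperation strategy can be illustrated by Zipf-style weights, where an agent's willingness to cooperate with its rank-r partner is proportional to 1/r; this is a sketch of the principle only, not the paper's exact plan formulation:

```python
# Illustrative Zipf-style cooperation plan: an agent spreads a unit
# budget of cooperation effort over k ranked partners in proportion to
# 1/rank (the principle of least effort). Numbers are made up.
def zipf_weights(k):
    h = sum(1.0 / r for r in range(1, k + 1))   # normalising constant H_k
    return [(1.0 / r) / h for r in range(1, k + 1)]

w = zipf_weights(4)  # weights for partners ranked 1..4, summing to 1
```

The weight for rank 1 is exactly twice that for rank 2, capturing the rapid drop-off that distinguishes this strategy from, say, a uniform cooperation plan.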

  • 15.
    Kamrani, Farzad
    et al.
    KTH, School of Information and Communication Technology (ICT), Electronic, Computer and Software Systems, ECS.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Electronic, Computer and Software Systems, ECS.
    Moradi, Farshad
    Holm, Gunnar
    Estimating performance of a business process model (2009). In: Winter Simulation Conference, 2009, p. 2828-2839. Conference paper (Refereed)
    Abstract [en]

    In this paper we suggest a model for estimating the performance of human organizations and business processes. This model is based on subjective assessment of the capabilities of the available human resources, the importance of these capabilities, and the influence of peripheral factors on the resources. The model can be used to compare different resource allocation schemes in order to choose the most beneficial one. We suggest an extension to the Business Process Modeling Notation (BPMN) that includes a performance measure for performers and the probability with which an outgoing Sequence Flow from a Gateway is chosen. We also propose an analytical method for estimating the overall performance of a BPMN model in simple cases and a simulation method, which can be used for more complicated scenarios. To illustrate how these methods work, we apply them to part of a military Operational Planning Process and discuss the results.
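The analytical estimate for a simple case can be sketched as a probability-weighted sum over the outgoing Sequence Flows of an exclusive Gateway; the function name and the numbers are illustrative, not from the paper:

```python
# Sketch of the analytical estimate for a single exclusive (XOR)
# gateway: overall performance is the probability-weighted sum of the
# performances of the outgoing sequence flows.
def expected_performance(branches):
    """branches: (probability, branch performance) pairs for the
    outgoing sequence flows of an exclusive gateway."""
    total_p = sum(p for p, _ in branches)
    assert abs(total_p - 1.0) < 1e-9, "branch probabilities must sum to 1"
    return sum(p * perf for p, perf in branches)

# Two outgoing flows chosen with probability 0.3 and 0.7:
overall = expected_performance([(0.3, 10.0), (0.7, 20.0)])  # 17.0
```

For nested gateways the same weighting composes recursively, which is where the paper's simulation method takes over for the scenarios an analytical expansion cannot handle cleanly.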

  • 16.
    Kamrani, Farzad
    et al.
    KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Garcia Lozano, Marianela
    KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    Path planning for UAVs using symbiotic simulation (2006). In: Modelling and Simulation 2006 / [ed] Nketsa, A.; Paludetto, M.; Bertelle, C., 2006, p. 207-213. Conference paper (Refereed)
    Abstract [en]

    The problem of efficient path planning for Unmanned Aerial Vehicles (UAVs) with a surveillance mission in a dynamic environment can in some cases be solved using Symbiotic Simulation (S2), i.e. an on-line simulation that interacts in real-time with the UAV and chooses its path. Sequential Monte Carlo simulation, also known as Particle Filtering (PF), is an instance of such a simulation. In this paper we describe a methodology and an algorithm that use PF for efficient path planning of a UAV which searches a road network for a target. To verify whether this method is feasible, and to supply a tool for comparing different methods, a simulator has been developed. This simulator and its features are also presented in this paper.

  • 17.
    Khatib, Iyad Al
    et al.
    KTH, Superseded Departments (pre-2005), Microelectronics and Information Technology, IMIT.
    Ayani, Rassul
    KTH, Superseded Departments (pre-2005), Microelectronics and Information Technology, IMIT.
    Maguire Jr., Gerald Q.
    KTH, Superseded Departments (pre-2005), Microelectronics and Information Technology, IMIT.
    Wireless LAN Access Points Uplink and Downlink Delays: Packet Service-Time Comparison (2002). In: Proceedings of the 16th Nordic Teletraffic Seminar, 2002, p. 253-264. Conference paper (Refereed)
    Abstract [en]

    Wireless LAN access points are important connecting nodes, transferring traffic between two media in opposite directions. Hence the performance of a wireless LAN access point should be examined from two different reference points: uplink (from WLAN to Ethernet) and downlink (from Ethernet to WLAN). This paper builds on our previous modeling of the wireless access point as a single-server, FIFO, queuing system to analyze the service times in both directions. The previous analysis showed that the average service time is a function of payload. Measurements have revealed that the uplink service time is much smaller than the downlink service time for the same payload. In this paper, we investigate the absolute value of the difference between the uplink and downlink service times, which we refer to as the UDC, or "Uplink-Downlink Contrast". Results show that as the packet size increases, the UDC either decreases or increases monotonically, depending on the brand of the access point. For a decreasing UDC, the absolute value of the difference between the uplink and downlink service times decreases; hence the UDC is convergent. Similarly, the UDC is divergent if it increases with increasing packet size. These results can be used to select a WLAN access point given the size of packets transmitted by an application, or by multiple applications, over a Local Area Network.

  • 18.
    Khatib, Iyad Al
    et al.
    KTH, Superseded Departments (pre-2005), Microelectronics and Information Technology, IMIT.
    Maguire Jr., Gerald Q.
    KTH, Superseded Departments (pre-2005), Microelectronics and Information Technology, IMIT.
    Ayani, Rassul
    KTH, Superseded Departments (pre-2005), Microelectronics and Information Technology, IMIT.
    Forsgren, Daniel
    KTH, Superseded Departments (pre-2005), Microelectronics and Information Technology, IMIT.
    MobiCom poster: wireless LAN access points as queuing systems: performance analysis and service time (2003). In: ACM SIGMOBILE Mobile Computing and Communications Review, ISSN 1559-1662, Vol. 7, no. 1, p. 28-30. Article in journal (Refereed)
    Abstract [en]

    Since the approval of the IEEE 802.11b by the IEEE in 1999, the demand for WLAN equipment and networks has been growing quickly. We present a queuing model of wireless LAN (WLAN) access points (APs) for IEEE 802.11b. We use experimentation to obtain the characteristic parameters of our analytic model. The model can be used to compare the performance of different WLAN APs as well as the QoS of different applications in the presence of an AP. We focus on the delay introduced by an AP. The major observations are that the delay to serve a packet going from the WLAN medium to the wired medium (on the uplink) is less than the delay to serve a packet, with identical payload, but travelling from the wired medium to the WLAN medium (on the downlink). A key result is an analytic solution showing that the average service time of a packet is a strictly increasing function of payload.
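The single-server FIFO view of the access point can be sketched as follows; the linear service-time form s = a + b*payload and the constants are assumed shapes consistent with "strictly increasing in payload", not the paper's fitted model:

```python
# Hypothetical single-server FIFO model of an access point: service
# time grows linearly with payload (an assumed increasing form), and a
# packet's delay is its queueing wait plus its own service time.
def fifo_delays(arrivals, payloads, a=0.1, b=0.002):
    """Per-packet delay through the AP; arrivals must be sorted."""
    delays, server_free_at = [], 0.0
    for t, size in zip(arrivals, payloads):
        start = max(t, server_free_at)       # wait if the server is busy
        server_free_at = start + a + b * size
        delays.append(server_free_at - t)    # total time spent in the AP
    return delays

# Three back-to-back 100-byte packets: later arrivals queue up, so
# per-packet delay grows even though each service time is identical.
delays = fifo_delays([0.0, 0.05, 0.1], [100, 100, 100])
```

Fitting separate (a, b) pairs to uplink and downlink measurements is one way such a model captures the uplink-vs-downlink asymmetry the abstract describes.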

  • 19.
    Khatib, Iyad Al
    et al.
    KTH, Superseded Departments (pre-2005), Microelectronics and Information Technology, IMIT.
    Maguire Jr., Gerald Q.
    KTH, Superseded Departments (pre-2005), Microelectronics and Information Technology, IMIT.
    Ayani, Rassul
    KTH, Superseded Departments (pre-2005), Microelectronics and Information Technology, IMIT.
    Forsgren, Daniel
    Wireless LAN Access Points as a Queuing System (2002). In: Proceedings of The Communications and Computer Networks 2002 Conference (CCN 2002), 2002, p. 463-468. Conference paper (Refereed)
    Abstract [en]

    This paper presents a research study of wireless LAN access points for IEEE 802.11b, where we seek to model the access point as a queuing system. The model can be used to compare performance metrics of different wireless LAN access points and to investigate the QoS of specific applications in the presence of a wireless LAN access point. In this paper, we focus on two parameters: the delay introduced by a wireless LAN access point and the average service time required to serve a packet passing through an access point. A major result is an analytic solution for the average service time of a packet in relationship to payload.

  • 20.
    Khatib, Iyad Al
    et al.
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    Maguire Jr., Gerald Q.
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    Forsgren, Daniel
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    Wireless LAN Access Points as a Queuing System, Performance Analysis and Service Time2002In: The Eighth ACM International Conference on Mobile Computingand Networking (ACM MOBICOM 2002 Conference), Association for Computing Machinery (ACM), 2002Conference paper (Refereed)
  • 21. Li, F.
    et al.
    Rasch, Katharina
    KTH, School of Information and Communication Technology (ICT), Software and Computer Systems, SCS.
    Sehic, S.
    Dustdar, S.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Unsupervised context-aware user preference mining2013In: Proceeding of Workshop on Activity Context-Aware System Architectures at the 27th AAAI Conference on Artificial Intelligence, 2013, p. 36-43Conference paper (Refereed)
    Abstract [en]

    In pervasive environments, users are situated in rich context and can interact with their surroundings through various services. To improve user experience in such environments, it is essential to find the services that satisfy user preferences in a given context. Thus the suitability of discovered services depends strongly on how well the context-aware system understands the user's current context and preferred activities. In this paper, we propose an unsupervised learning solution for mining user preferences from the user's past context. To cope with the high dimensionality and heterogeneity of context data, we propose a subspace clustering approach that is able to find user preferences identified by different feature sets. The results of our approach are validated by a series of experiments.

  • 22.
    Li, Fei
    et al.
    Vienna University of Technology, Austria.
    Rasch, Katharina
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Truong, Hong-Linh
    Vienna University of Technology, Austria.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Dustdar, Schahram
    Vienna University of Technology, Austria.
    Proactive Service Discovery in Pervasive Environments2010In: Proceedings of the 7th ACM International Conference on Pervasive Services (ICPS), 2010, p. 126-133Conference paper (Refereed)
    Abstract [en]

    Pervasive environments are characterized by rich and dynamic context, where users need to be continuously informed about services relevant to their current context. Implicit discovery requests, triggered by changes of user context, available services, or user preferences, are prevalent in such environments. This paper proposes a proactive service discovery approach for pervasive environments to address these implicit requests. Services and user preferences are described by a formal context model, which effectively captures the dynamics of context and the relationship between services and users. Based on the model, we propose a proactive discovery algorithm to continuously present the most relevant services to the user in response to changes of context, services or user preferences. Numeric coding methods are applied in different phases of the algorithm to improve its performance. A proactive service discovery system is proposed and the context model is grounded in a smart home environment. Experimental results show that our approach can efficiently provide the user with up-to-date information about useful services.

  • 23. Liljenstam, M.
    et al.
    Ronngren, R.
    Ayani, Rassul
    KTH, Superseded Departments (pre-2005), Microelectronics and Information Technology, IMIT.
    Partitioning WCN models for parallel simulation of radio resource management2001In: Wireless networks, ISSN 1022-0038, E-ISSN 1572-8196, Vol. 7, no 3, p. 307-324Article in journal (Refereed)
    Abstract [en]

    Parallel simulation techniques have been proposed as a possible solution to the execution time and memory constraints often encountered in detailed simulations of Wireless Cellular Networks (WCNs). However, partitioning represents a major challenge for models that encompass elements of radio propagation phenomena. This paper discusses the partitioning problem with respect to Parallel Discrete Event Simulation, and we formulate an approach to study the partitioning of a WCN model that includes radio propagation. Various options for a model of moderate size, where interference is calculated over the whole system, are evaluated through experimentation and some limited mathematical analysis. Results indicate that radio-spectrum-based partitioning is preferable to geographically based partitioning for this model in many realistic scenarios. It is also noted that the characteristics of the model differ sufficiently from other previously studied spatially explicit problems to reduce or even annihilate the effectiveness of some commonly used partitioning techniques.

  • 24.
    Mahmood, Imran
    et al.
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Moradi, F.
    Verifying dynamic semantic composability of BOM-based composed models using colored petri nets2012In: Principles of Advanced and Distributed Simulation (PADS), 2012 ACM/IEEE/SCS 26th Workshop on, IEEE , 2012, p. 250-257Conference paper (Refereed)
    Abstract [en]

    Model reuse is a promising and appealing convention for effective development of simulation systems, as it offers reductions in development cost and time. Various methodological advances in this area have given rise to the development of different component reusability frameworks such as BOM (Base Object Model). But the lack of component matching and weak support for composability verification and validation in these frameworks make it difficult to achieve effective and meaningful reuse. In this paper we focus on composability verification and propose a process to verify a BOM-based composed model at the dynamic semantic level. We suggest an extension to the BOM components to capture behavior in greater detail. We then transform the extended BOM into our proposed Colored Petri Nets (CPN) based component model so that the components can be composed and executed at an abstract level. Subsequently we advocate using CPN tools and analysis techniques to verify that the model satisfies given requirements. We classify the properties of a system among different groups and express the model's requirements by selecting some of the properties from these groups to form a requirement specification. We also present an example of a Field Artillery model, in which we select a set of properties as the requirement specification, and explain how the CPN state-space analysis technique is used to verify the required properties. Our experience confirms that CPN tools provide strong support for verification of composed models.

  • 25.
    Mahmood, Imran
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Moradi, Farshad
    Swedish Defense Research Agency (FOI).
    Behavioral verification of BOM based composed models2010In: 22th European Modeling and Simulation Symposium, EMSS 2010 / [ed] Agostine Bruzzone, Claudia Frdman, Francesco Longo, Khalid Mekouar, Miquel Angel Piera, Fes, Moroco: ESISA Ecole Supérieure d'Ingénierie en Sciences Appliquées , 2010, p. 341-350Conference paper (Refereed)
    Abstract [en]

    A verified composition of predefined reusable simulation components such as BOMs (Base Object Models) plays a significant role in saving time and cost in the development of various simulations. BOM represents a reusable component framework and possesses the ability to rapidly compose simulations, but lacks the semantic and behavioral expressiveness required to match components for a suitable composition. Moreover, external techniques are required for behavioral verification of BOM-based components. In this paper we discuss behavioral verification and propose an approach to verify the dynamic behavior of a set of composed BOM components against given specifications. We further define a Model Tester that provides means to verify the behavior of a composed model during its execution. We motivate our verification approach by suggesting solutions for some of the categories of system properties. We also provide a case study to clarify our approach.

  • 26.
    Mahmood, Imran
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Moradi, Farshad
    Swedish Defense Research Agency (FOI).
    Composability Test of BOM based models using Petri Nets2010In: Proceedings of the 22nd IFIP International Conferenceon Testing Software and Systems: Short Papers / [ed] Alexandre Petrenko, Adenilso Simão, José Carlos Maldonado, Montreal, QC Canada: CRIM (Centre de recherche informatique de Montréal) , 2010, p. 7-12Conference paper (Refereed)
    Abstract [en]

    Reusability is a widely used concept which has recently received renewed attention to meet the challenge of reducing the cost and time of simulation development. An approach to achieve effective reusability is through composition of predefined components, which is promising but a daunting challenge in the research community. Base Object Model (BOM) is a component-based standard designed to support reusability and composability in the distributed simulation community. BOM provides good model representation for component reuse; however, this framework lacks the capability to express semantic and behavioral matching at the conceptual level. Furthermore, there is a need for a technique to test the correctness of BOM compositions in terms of structure and behavior. In this paper we discuss verification of BOM-based models and test their suitability for the intended purpose and objectives. We suggest a technique through which the composed model can automatically be transformed into a single Petri Net (PN) model, which can then be verified using different existing PN analysis tools. We further motivate our approach by suggesting a deadlock detection technique as an example, and provide a case study to clarify our approach.
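    The deadlock detection mentioned in this abstract amounts to exploring the reachable state space of the composed Petri Net and flagging non-final states with no enabled transitions. A minimal, tool-agnostic sketch of that idea follows; the `fire` callback and the state encoding are hypothetical simplifications, not the authors' transformation.

    ```python
    def find_deadlocks(initial, fire, is_final=lambda s: False):
        """Enumerate all states reachable from `initial` via the successor
        function `fire`, and return those with no successors that are not
        designated final states, i.e. deadlocks."""
        seen, stack, deadlocks = {initial}, [initial], set()
        while stack:
            s = stack.pop()
            successors = list(fire(s))
            if not successors and not is_final(s):
                deadlocks.add(s)
            for t in successors:
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        return deadlocks
    ```

    Real PN analysis tools perform essentially this search over markings, with far more sophisticated state-space reduction.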

  • 27.
    Mahmood, Imran
    et al.
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Moradi, Farshad
    Swedish Defence Research Agency, FOI.
    Composability Verification of Real Time System Models Using Colored Petri Nets2013In: Proceedings - UKSim 15th International Conference on Computer Modelling and Simulation, UKSim 2013, New York: IEEE , 2013, p. 407-412Conference paper (Refereed)
    Abstract [en]

    The discipline of component-based modeling and simulation offers promising gains, including reductions in development cost, time, and system complexity. It also promotes (re)use of modular components to build complex simulations. Many important issues in this area have been addressed, but composability verification is still considered a daunting challenge. In our observation, most component-based modeling frameworks possess weak built-in support for composability verification, which is required to guarantee the correctness of the structural, behavioral and temporal aspects of the composition. In this paper we present a practical approach to alleviate some of the challenges in composability verification and propose a process to verify the composability of real-time system models. We emphasize the dynamic semantic level and present our approach using Colored Petri Nets and state-space analysis. We also present a Field Artillery model as an example of a real-time system and explain how our approach verifies model composability.

  • 28.
    Mahmood, Imran
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Moradi, Farshad
    Swedish Defence Research Agency, FOI.
    Fairness Verification of BOM-Based Composed Models Using Petri Nets2011In: Proceedings of the 2011 IEEE Workshop on Principles of Advanced and Distributed Simulation, Washington, DC, USA: IEEE Computer Society , 2011, p. 5936770-Conference paper (Refereed)
    Abstract [en]

    Model reuse is a promising and appealing convention for effective development of simulation systems. However, it poses daunting research challenges, such as reusability and composability in model integration. Various methodological advances in this area have given rise to the development of different component reusability frameworks such as BOM (Base Object Model). However, the lack of component matching and of support for composability verification and validation makes it difficult to achieve effective and meaningful reuse. For this reason there is a need for adequate methods to verify and validate the composability of a BOM-based composed model. A verified composed model ensures the satisfaction of desired system properties. Fairness, as defined in section II, is an important system property which ensures that no component in a composition is delayed indefinitely. Fairness in a composed model guarantees the participation of all components in order to achieve the desired objectives. In this paper we focus on verification and propose to transform a composed BOM into a Petri Nets model and use different analysis techniques to perform its verification. We propose an algorithm to verify the fairness property and provide a case study of a manufacturing system to explain our approach.

  • 29.
    Mahmood, Imran
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Moradi, Farshad
    Statemachine Matching in BOM based model Composition2009In: IEEE ACM DIS SIM REAL TIME / [ed] Turner SJ, Roberts D, Cai W, ElSaddik A, Los Alamitos, CA: IEEE COMPUTER SOC , 2009, p. 136-143Conference paper (Refereed)
    Abstract [en]

    Base Object Model (BOM) is a component-based standard designed to support reusability and composability. Reusability helps in reducing the time and cost of the development of a simulation process. Composing predefined components such as BOMs is a well-known approach to achieve reusability. However, there is a need for a matching mechanism to identify whether a set of components is composable or not. Although BOM provides good model representation, it lacks the capability to express semantic and behavioral matching. In this paper we propose an approach for matching the behavior of BOM components by matching their statemachines. Our proposed process includes a static and a dynamic matching phase. In the static matching phase, we apply a set of rules to validate the structure of the statemachines. In the dynamic matching phase, we execute the statemachines together at an abstract level on our proposed execution framework. We have developed this framework using State Chart XML (SCXML), which is a W3C-compliant standard. If the execution terminates successfully (i.e. reaches the specified final states) we conclude that there is a positive match and the behavior of these BOMs is composable. We describe the matching process and the implementation of our runtime environment in detail and present a case study as proof of concept.
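    The static matching phase described here can be pictured as a label-compatibility check between two state machines. The representation below (sets of sent and accepted events) is a hypothetical simplification for illustration, not the rule set the authors apply to SCXML state charts.

    ```python
    def statically_match(sm_a, sm_b):
        """Static matching sketch: every event one state machine can send
        must be accepted somewhere by the other, so a joint execution can
        never emit an event its partner has no transition for."""
        return (sm_a["sends"] <= sm_b["accepts"]
                and sm_b["sends"] <= sm_a["accepts"])
    ```

    For example, a hypothetical shooter/target pair matches when each one's output events are covered by the other's input events; the dynamic phase would then actually co-execute the machines toward their final states.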

  • 30. March, V.
    et al.
    Teo, Y. M.
    Lim, H. B.
    Eriksson, Peter
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    Collision detection and resolution in hierarchical peer-to-peer systems2005In: Proceedings - Conference on Local Computer Networks, LCN, 2005, p. 2-9Conference paper (Refereed)
    Abstract [en]

    Structured peer-to-peer systems can be organized hierarchically as two-level overlay networks. The top-level overlay consists of groups of nodes, where each group is identified by a group identifier. In each group, one or more nodes are designated as supernodes and act as gateways to the nodes at the second level. A collision occurs during join operations, when two or more groups with the same group identifier are created at the top-level overlay. Collisions increase the lookup path length and the stabilization overhead, and reduce the scalability of hierarchical peer-to-peer systems. We propose a new scheme to detect and resolve collisions, and we study the impact of the collision problem on the performance of peer-to-peer systems. Our simulation results show the effectiveness of our scheme in reducing collisions and maintaining the size of the top-level overlay close to the ideal size.
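    A collision in this sense is two or more independently created top-level groups claiming the same group identifier. Detection can be sketched as below; the `(group_id, supernode)` encoding is an assumption for illustration, not the paper's protocol.

    ```python
    from collections import defaultdict

    def detect_collisions(registrations):
        """registrations: iterable of (group_id, supernode) pairs observed
        at the top-level overlay. Returns the group ids claimed by more
        than one distinct supernode, i.e. candidates for resolution."""
        by_gid = defaultdict(set)
        for gid, supernode in registrations:
            by_gid[gid].add(supernode)
        return {gid for gid, sns in by_gid.items() if len(sns) > 1}
    ```

    Resolution would then merge the colliding groups (or rename all but one), keeping the top-level overlay close to its ideal size.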

  • 31. Moradi, F.
    et al.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Mokarizadeh, Shahab
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Tan, G.
    A rule-based semantic matching of base object models2009In: International Journal of Simulation and Process Modelling, ISSN 1740-2123, E-ISSN 1740-2131, Vol. 5, no 2, p. 132-145Article in journal (Refereed)
    Abstract [en]

    Creating simulation models via composition of predefined and reusable components is an efficient way of reducing the costs and time associated with simulation model development. However, to successfully compose models one has to solve the issues of syntactic and semantic composability of components. The Base Object Model (BOM) standard is an attempt to ease reusability and composition of simulation models. However, the BOM does not contain sufficient information for defining the necessary concepts and terms to avoid ambiguity, nor does it provide any method for matching the dynamic aspects of conceptual models (i.e., their state-machines). In this paper, we present our approach for enhancement of the semantic contents of BOMs and propose a three-layer model for syntactic and semantic matching of BOMs. The enhancement includes ontologies for entities, events and interactions in each component. We also present an OWL-S description for each component, including the state-machines. To test our approach, we specify some simulation scenarios and implement BOMs as building blocks for the development of those scenarios, one of which is presented in this paper. We also define a composability degree, which quantifies the closeness of the composed model to a given model specification. Our results show that the three-layer model is promising and can improve and simplify the composition of BOM-based components.

  • 32.
    Moradi, Farshad
    et al.
    KTH, Superseded Departments (pre-2005), Microelectronics and Information Technology, IMIT.
    Ayani, Rassul
    KTH, Superseded Departments (pre-2005), Microelectronics and Information Technology, IMIT.
    Parallel and distributed simulation2003In: Applied system simulation: methodologies and applications / [ed] Mohammad Salameh Obaidat,Georgios I. Papadimitriou, Kluwer , 2003, p. 457-486Chapter in book (Other academic)
  • 33.
    Moradi, Farshad
    et al.
    Swedish Defence Research Agency, FOI.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Mahmood, Imran
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    An Agent-based Environment for Simulation Model Composition2008In: Proceedings of the 22nd ACM/IEEE/SCS Workshop on Principles of Advanced and Distributed Simulation PADS, 2008, p. 175-184Conference paper (Refereed)
    Abstract [en]

    As Modelling and Simulation gains more popularity, the demand for reducing the time and resource costs associated with development and validation of simulation models has also increased. Composing simulation models from reusable and validated simulation components is one approach to addressing this demand. This approach requires a composition process that is able to support a modeller with the discovery and identification of components as well as giving feedback on the feasibility of a composition.

    Software agents are programs that can, with some degree of autonomy, perform tasks on behalf of a user or another program. In a Multi Agent System (MAS), autonomous agents interact and collaborate with each other in order to solve complex problems that are beyond the individual capabilities or knowledge of each agent, thus providing modularity and scalability. The objective of this work has been to develop a Multi Agent System for discovery and composition of BOM (Base Object Model) based simulation models, which provides the flexibility and adaptability to test and assess, amongst others, different discovery and composition methods and techniques.

    The MAS that we developed is based on the JACK (TM) Intelligent Agents platform and executes a rule-based process for discovery and composition of BOMs. Our preliminary results indicate its feasibility, portability, adaptability and flexibility.

  • 34.
    Moradi, Farshad
    et al.
    Swedish Defence Research Agency, FOI.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Mokarizadeh, Shahab
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Akbari Shahmirzadi, Gholam Hossein
    KTH, School of Information and Communication Technology (ICT).
    Tan, Gary
    National University of Singapore.
    A rule-based approach to syntactic and semantic composition of BOMs2007In: 11th IEEE International Symposium on Distrubuted Simulation and Real Time Applications, 2007, p. 145-155Conference paper (Refereed)
    Abstract [en]

    Creating simulation models via composition of predefined and reusable components is an efficient way of reducing costs and time associated with the simulation model development process. However, in order to successfully compose models one has to solve the issues of syntactic and semantic composability of components. HLA is the most widely used architecture for distributed simulations today. It provides a simulation environment and standards for specifying simulation parts and interactions between simulation parts. But it provides little support for semantic composability. The Base Object Model (BOM) standard is an attempt to ease reusability and composition of simulation models. However, BOMs do not contain sufficient information for defining concepts and terms in order to avoid ambiguity, and provide no methods for matching conceptual models (state machines).

    In this paper, we present our approach for enhancement of the semantic contents of BOMs and propose a three-layer model for syntactic and semantic matching of BOMs. The semantic enhancement includes ontologies for entities, events and interactions in each component. We also present an OWL-S description for each component, including the state machines. The three-layer model consists of syntactic matching, static semantic matching and dynamic semantic matching, utilising a set of rules for reasoning about the compositions. We also describe our discovery and matching rules, which have been implemented in the Jess inference engine. In order to test our approach we have defined some simulation scenarios and implemented BOMs as building blocks for the development of those scenarios, one of which is presented in this paper. Our results show that the three-layer model is promising and can improve and simplify the composition of BOM-based components.

  • 35.
    Moradi, Farshad
    et al.
    School of Computing, National University of Singapore.
    Ayani, Rassul
    School of Computing, National University of Singapore.
    Tan, Gary
    School of Computing, National University of Singapore.
    Some Ownership Management Issues in Distributed Simulation Using HLA/RTI2001In: Parallel and Distributed Computing Practices, ISSN 1097-2803, Vol. 4, no 1Article in journal (Refereed)
    Abstract [en]

    To study the High Level Architecture (HLA) and the services that are provided by the Runtime Infrastructure (RTI), in particular Object Management and Ownership Management, we have developed a distributed air traffic controller simulator. In our simulation model, each airport is represented by a federate and controls a number of aircraft. The control of the aircraft is transferred among airports as the aircraft fly to different airports. In this paper we discuss two different approaches that we have used to facilitate the exchange of ownership of aircraft attributes among federates, namely the pull and the negotiated push method. We present a comparison of the two methods and also discuss the problems associated with each method and our approach to resolving them. These problems include the oscillation effect, which causes aircraft attributes to be pulled back and forth between federates, and the pending attribute acquisition requests, which result in loss of aircraft (or unattended aircraft attributes). We have experienced that in such scenarios, the push method is more efficient and accurate. We also present our experiences and observations from our experimentation with the RTI. We have noticed some shortcomings in the current RTI interface specification that we will discuss in the paper.

  • 36.
    Moradi, Farshad
    et al.
    FOI.
    Nordvaller, Peder
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    Simulation Model Composition using BOMs2006In: 10th IEEE International Symposium on Distributed Simulation and Real Time Applications, 2006, p. 242-249Conference paper (Refereed)
    Abstract [en]

    The Base Object Model, BOM, is a new standard for defining reusable and composable simulation components. The introduction of BOMs into the simulation community opens up the possibility of a component-based simulation development approach that is faster and more efficient than today's simulation creation process. In this paper we describe a process that has been developed at the Swedish Defence Research Agency (FOI) with the aim of speeding up and improving the development of simulation models. This process utilizes the BOM concept coupled with ontologies in simulation development, and employs SRML (Simulation Reference Markup Language) as a means to define a component-based simulation at a high level. We present our experimental results and findings based on our implementation of the proposed process. Our experience indicates that including ontological information in BOMs will further increase their usability.

  • 37.
    Rasch, Katharina
    et al.
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Li, F.
    Sehic, S.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Dustdar, S.
    Automatic description of context-altering services through observational learning2012In: Pervasive Computing, Springer Berlin/Heidelberg, 2012, Vol. 7319 LNCS, p. 461-477Conference paper (Refereed)
    Abstract [en]

    Understanding the effect of pervasive services on user context is critical to many context-aware applications. Detailed descriptions of context-altering services are necessary, and manually adapting them to the local environment is a tedious and error-prone process. We present a method for automatically providing service descriptions by observing and learning from the behavior of a service with respect to its environment. By applying machine learning techniques on the observed behavior, our algorithms produce high quality localized service descriptions. In a series of experiments we show that our approach, which can be easily plugged into existing architectures, facilitates context-awareness without the need for manually added service descriptions.

  • 38.
    Rasch, Katharina
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Li, Fei
    Vienna University of Technology, Austria.
    Sehic, S.
    Vienna University of Technology, Austria.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Dustdar, S.
    Context-driven personalized service discovery in pervasive environments2011In: World wide web (Bussum), ISSN 1386-145X, E-ISSN 1573-1413, Vol. 14, no 4, p. 295-319Article in journal (Refereed)
    Abstract [en]

    Pervasive environments are characterized by a large number of embedded devices offering their services to the user. Which of the available services are of most interest to the user considerably depends on the user's current context. User context is often rich and very dynamic, making an explicit, user-driven discovery of services impractical. Users in such environments would instead like to be continuously informed about services relevant to them. Implicit discovery requests triggered by changes in the context are therefore prevalent. This paper proposes a proactive service discovery approach for pervasive environments addressing these implicit requests. Services and user preferences are described by a formal context model called Hyperspace Analogue to Context, which effectively captures the dynamics of context and the relationship between services and context. Based on the model, we propose a set of algorithms that can continuously present the most relevant services to the user in response to changes of context, services or user preferences. Numeric coding methods are applied to improve the algorithms' performance. The algorithms are grounded in a context-driven service discovery system that automatically reacts to changes in the environment. New context sources and services can be dynamically integrated into the system. A client for smart phones continuously informs users about the discovery results. Experiments show that the system can efficiently provide the user with continuous, up-to-date information about the most useful services in real time.

  • 39. Shell, Y. H.
    et al.
    Wentong, C.
    Turner, S. J.
    Hsu, W. J.
    Suiping, Z.
    Low, M. Y. H.
    Fujimoto, R.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    A generic symbiotic simulation framework. 2006. In: Proceedings - Workshop on Principles of Advanced and Distributed Simulation, PADS, 2006. Conference paper (Refereed)
    Abstract [en]

    A symbiotic or online simulation is defined as one that interacts with the physical system in a mutually beneficial way. The simulation is driven by real-time data collected from a physical system under control and needs to meet the real-time requirements of the physical system. In turn, the results from the "what-if" experiments performed by the simulator can be used to control the dynamic behaviour of the physical system. Such a simulation tool is intended for real-time planning, to foresee and advise on problems in real time. It aims to improve performance, to adapt to sudden and unexpected events, and to improve aspects of safety and security of the physical system. There are some research efforts in this area, for example Dynamic Data-Driven Application Systems (DDDAS), currently advocated by the National Science Foundation (NSF) of the United States. However, work in this area is far from complete and many of the research issues are not fully addressed. We see the need for a general framework for symbiotic simulation. In this project, simulators based on the same general framework will be developed using symbiotic simulation techniques and will be used to provide adaptive decision support to manage the resources in several application environments. The objectives of this project are: To develop a generic, agent-based, symbiotic simulation system architecture. To develop mechanisms to support dynamic coupling between the symbiotic simulation system and the physical system. To conduct pilot case studies of the simulation framework. To explore and evaluate the service oriented architecture approach for parallel simulation in the multi-agent symbiotic simulation framework.

  • 40.
    Skogh, Hans-Emil
    et al.
    KTH, School of Information and Communication Technology (ICT).
    Haeggström, Jonas
    KTH, School of Information and Communication Technology (ICT).
    Ghodsi, Ali
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    Fast freenet: Improving freenet performance by preferential partition routing and File Mesh propagation. 2006. Conference paper (Refereed)
    Abstract [en]

    The Freenet Peer-to-Peer network is doing a good job in providing anonymity to the users. But the performance of the network in terms of download speed and request hit ratio is not that good. We propose two modifications to Freenet in order to improve the download speed and request hit ratio for all participants. To improve download speed we propose Preferential Partition Routing, where nodes are grouped according to bandwidth and slow nodes are discriminated against when routing. For improvements in request hit ratio we propose File Mesh propagation, where each node sends fuzzy information about what documents it possesses to its neighbors. To verify our proposals we simulate the Freenet network and the bandwidth restrictions present between nodes, as well as using observed distributions for user actions, to show how the network is affected. Our results show an improvement of the request hit ratio by over 30 times and an increase of the average download speed by six times, compared to regular Freenet routing.

  • 41. Ta, D. N. B.
    et al.
    Zhou, S.
    Cai, W.
    Tang, X.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT).
    Efficient zone mapping algorithms for distributed virtual environments. 2009. In: PADS 2009: 23rd Workshop on Principles of Advanced and Distributed Simulation, Proceedings, 2009, p. 137-144. Conference paper (Refereed)
    Abstract [en]

    This paper deals with the zone mapping problem in large-scale distributed virtual environments (DVEs), e.g., massively multi-player online games, distributed military simulations, etc. To support such large-scale DVEs with real-time interactions among thousands of concurrent, geographically separated clients, a distributed server infrastructure is generally needed, and the virtual world can be partitioned into several distinct zones to distribute the load among the servers. The NP-hard zone mapping problem concerns how to assign the zones of the virtual world to a number of distributed servers to improve interactivity. In this paper, we propose new zone mapping algorithms based on a Linear Programming relaxation of the original problem and meta-heuristics such as local search and evolutionary optimization techniques. We conducted extensive experiments with realistic Internet latency models obtained from real measurements using millions of pairs of geographically distributed IP addresses. The results have shown that our newly proposed algorithms significantly improved the performance of large-scale DVEs in terms of overall interactivity, when compared with existing mechanisms.
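    The local-search meta-heuristic the abstract mentions can be sketched as a simple hill climb over zone-to-server assignments. The cost model below (zone-to-server latency plus a quadratic load-imbalance penalty) and the single-zone move rule are illustrative assumptions, not the paper's exact formulation.

    ```python
    import random
    from collections import Counter

    # Illustrative cost: total latency plus a penalty for uneven server load.
    def total_cost(assign, latency, penalty=1.0):
        load = Counter(assign.values())
        return (sum(latency[z][s] for z, s in assign.items())
                + penalty * sum(c * c for c in load.values()))

    def local_search(zones, servers, latency, iters=500, seed=1):
        """Hill climbing: start from a random mapping, repeatedly try
        moving one random zone to a random server, keep improving moves."""
        rng = random.Random(seed)
        assign = {z: rng.choice(servers) for z in zones}
        cost = total_cost(assign, latency)
        for _ in range(iters):
            z, s = rng.choice(zones), rng.choice(servers)
            old = assign[z]
            if s == old:
                continue
            assign[z] = s
            new_cost = total_cost(assign, latency)
            if new_cost < cost:
                cost = new_cost      # keep the improving move
            else:
                assign[z] = old      # revert
        return assign, cost
    ```

    The paper combines such moves with an LP-relaxation starting point and evolutionary operators; this sketch shows only the basic improving-move loop.
    
    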

  • 42. Ta, D. N. B.
    et al.
    Zhou, S.
    Cai, W.
    Tang, X.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    QoS-aware server provisioning for large-scale distributed virtual environments. 2010. In: Proceedings - Workshop on Principles of Advanced and Distributed Simulation, PADS, 2010, p. 23-30. Conference paper (Refereed)
    Abstract [en]

    Maintaining interactivity is one of the key challenges in distributed virtual environments (DVE) due to the large, heterogeneous Internet latency and the fact that clients in a DVE are usually geographically separated. Previous work in this area has dealt with optimizing interactivity performance given limited server resources. In this paper, we consider a new problem, termed performance-constrained server provisioning, whose goal is to minimize the resources needed to achieve a predetermined level of Quality of Service (QoS). We identify and formulate two variants of this new problem and show that they are both NP-hard via reductions to the set covering problem. We also propose several computationally efficient approximation algorithms for solving the problem. Via extensive simulation study, we show that the newly proposed algorithms that take into account inter-server dependencies significantly outperform the well-known set covering algorithm for both problem variants.
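    The well-known set covering baseline the abstract refers to is the classic greedy heuristic. A minimal sketch, under the assumption that each candidate server is modelled as the set of clients it can serve within the latency bound (names are illustrative, not from the paper):

    ```python
    # Classic greedy set-covering heuristic: repeatedly pick the candidate
    # that covers the most still-uncovered elements.
    def greedy_set_cover(universe, candidates):
        """candidates: dict mapping candidate name -> set of covered elements.
        Returns the list of chosen candidate names."""
        uncovered = set(universe)
        chosen = []
        while uncovered:
            name, best = max(candidates.items(),
                             key=lambda kv: len(uncovered & kv[1]))
            if not uncovered & best:
                raise ValueError("remaining elements cannot be covered")
            chosen.append(name)
            uncovered -= best
        return chosen
    ```

    The paper's contribution is that provisioning decisions which also consider inter-server dependencies beat this per-candidate greedy choice; the sketch shows only the baseline.
    
    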

  • 43. Ta, D.
    et al.
    Nguyen, T.
    Zhou, S.
    Tang, X.
    Cai, W.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    A framework for performance evaluation of large-scale interactive distributed virtual environments. 2010. In: Proceedings - 10th IEEE International Conference on Computer and Information Technology, CIT-2010, 7th IEEE International Conference on Embedded Software and Systems, ICESS-2010, ScalCom-2010, 2010, p. 2744-2751. Conference paper (Refereed)
    Abstract [en]

    Interactivity is a key consideration in large-scale distributed virtual environments (DVEs), such as massively multi-player online games. Due to Internet dynamics and the large-scale, distributed nature of many DVEs, it is difficult to quantify the possible improvements claimed by various DVE interactivity enhancement approaches. In this paper, we propose a scalable framework that supports evaluation of interactivity indicators under dynamic Internet conditions. We have developed and used this framework to conduct extensive experiments on PlanetLab with many distributed client and server locations. We have collected the latency experienced by over 700 client locations distributed around the globe over a period of a few days, and used it to evaluate the performance of some recently proposed zone mapping algorithms.

  • 44. Ta, D.
    et al.
    Zhou, S.
    Cai, W.
    Tang, X.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Network-aware server placement for highly interactive distributed virtual environments. 2008. Conference paper (Refereed)
    Abstract [en]

    In distributed virtual environments, e.g., online gaming, collaborative design and distributed military simulations, interactivity is one of the most important requirements. Users may notice serious degradations in quality of service when interacting in the virtual world if the response from the system is much slower than what they have experienced in real life. In this paper, we consider the problem of placing distributed servers in the network to reduce client-server communication latencies, termed the server placement problem. We propose two new network-aware placement algorithms which take into account users' locations in the network and connectivity at the Autonomous System level to determine good sites for servers. Extensive experiments with realistic network models show that these new algorithms significantly outperform existing approaches that require full knowledge of network connectivity at router-level topologies.

  • 45. Ta, Duong Nguyen Binh
    et al.
    Nguyen, Thang
    Zhou, Suiping
    Tang, Xueyan
    Cai, Wentong
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Interactivity-Constrained Server Provisioning in Large-Scale Distributed Virtual Environments. 2012. In: IEEE Transactions on Parallel and Distributed Systems, ISSN 1045-9219, E-ISSN 1558-2183, Vol. 23, no 2, p. 304-312. Article in journal (Refereed)
    Abstract [en]

    Maintaining interactivity is one of the key challenges in distributed virtual environments (DVEs). In this paper, we consider a new problem, termed the interactivity-constrained server provisioning problem, whose goal is to minimize the number of distributed servers needed to achieve a prespecified level of interactivity. We identify and formulate two variants of this new problem and show that they are both NP-hard via reductions to the set covering problem. We then propose several computationally efficient approximation algorithms for solving the problem. The main algorithms exploit dependencies among distributed servers to make provisioning decisions. We conduct extensive experiments to evaluate the performance of the proposed algorithms. Specifically, we use both static Internet latency data available from prior measurements and topology generators, as well as the most recent, dynamic latency data collected via our own large-scale deployment of a DVE performance monitoring system over PlanetLab. The results show that the newly proposed algorithms that take into account interserver dependencies significantly outperform the well-established set covering algorithm for both problem variants.

  • 46. Ta, Duong Nguyen Binh
    et al.
    Zhou, Suiping
    Cai, Wentong
    Tang, Xueyan
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    Multi-objective zone mapping in large-scale distributed virtual environments. 2011. In: Journal of Network and Computer Applications, ISSN 1084-8045, E-ISSN 1095-8592, Vol. 34, no 2, p. 551-561. Article in journal (Refereed)
    Abstract [en]

    In large-scale distributed virtual environments (DVEs), the NP-hard zone mapping problem concerns how to assign distinct zones of the virtual world to a number of distributed servers to improve overall interactivity. Previously, this problem has been formulated as a single-objective optimization problem, in which the objective is to minimize the total number of clients that are without QoS. This approach may cause considerable network traffic and processing overhead, as a large number of zones may need to be migrated across servers. In this paper, we introduce a multi-objective approach to the zone mapping problem, in which both the total number of clients without QoS and the migration overhead are considered. To this end, we have proposed several new algorithms based on meta-heuristics such as local search and multi-objective evolutionary optimization techniques. Extensive simulation studies have been conducted with realistic network latency data modeled after actual Internet measurements, and different workload distribution settings. Simulation results demonstrate the effectiveness of the newly proposed algorithms.
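    At the core of the multi-objective formulation is the notion of Pareto dominance between candidate zone mappings. A minimal sketch, with both objectives minimized (clients without QoS, and migration overhead); the function names are illustrative:

    ```python
    # Pareto dominance for minimization problems: `a` dominates `b` if it is
    # no worse in every objective and strictly better in at least one.
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def pareto_front(solutions):
        """Keep only the solutions not dominated by any other solution."""
        return [s for s in solutions
                if not any(dominates(t, s) for t in solutions if t != s)]
    ```

    A multi-objective evolutionary optimizer of the kind the paper builds on maintains such a non-dominated set across generations instead of a single best solution.
    
    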

  • 47. Tan, G.
    et al.
    Persson, Anders
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    HLA federate migration. 2005. In: 38th Annual Simulation Symposium, Proceedings, IEEE Computer Society, 2005, p. 243-250. Conference paper (Refereed)
    Abstract [en]

    The High Level Architecture (HLA) is a standardized framework for distributed simulation that promotes reuse and interoperability of simulation components (federates). Federates are processes which communicate with each other in the simulation via the Run Time Infrastructure (RTI). When running a large scale simulation over many nodes/workstations, some may get more workload than others. To run the simulation as efficiently as possible, the workload should be uniformly distributed over the nodes. Current RTI implementations are very static, and do not allow any load balancing. Load balancing of a HLA federation can be achieved by scheduling new federates on the node with least load and migrating executing federates from a highly loaded node to a lightly loaded node. Process migration has been a topic of research for many years, but not within the context of HLA. This paper focuses on process migration within the HLA framework.

  • 48. Taylor, S. J. E.
    et al.
    Turner, S. J.
    Mustafee, N.
    Ahlander, H.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastructure, Software and Computer Systems, SCS.
    A comparison of CMB- and HLA-based approaches to type I interoperability reference model problems for COTS-based distributed simulation. 2005. In: Simulation (San Diego, Calif.), ISSN 0037-5497, E-ISSN 1741-3133, Vol. 81, no 1, p. 33-43. Article in journal (Refereed)
    Abstract [en]

    Commercial off-the-shelf (COTS) simulation packages (CSPs) are software used by many simulation modelers to build and experiment with models of various systems in domains such as manufacturing, health, logistics, and commerce. As part of an ongoing standardization effort, this article introduces the COTS Simulation Package Emulator (CSPE), a proposed benchmark that can be used to investigate asynchronous entity-passing problems as described by the Type I interoperability reference model for COTS-based distributed simulation. To demonstrate its use, two approaches to this form of interoperability are discussed: an implementation based on the Chandy-Misra-Bryant (CMB) conservative algorithm and an implementation based on the High Level Architecture (HLA) Time Advance Request (TAR). It is shown that the HLA approach outperforms the CMB approach in almost all cases. The article concludes that the CSPE benchmark is a valid basis from which the most efficient approach to Type I interoperability problems for COTS-based distributed simulation can be discovered.

  • 49. Teo, Y. M.
    et al.
    Ayani, Rassul
    KTH, Superseded Departments (pre-2005), Microelectronics and Information Technology, IMIT.
    Comparison of load balancing strategies on cluster-based web servers. 2001. In: Simulation (San Diego, Calif.), ISSN 0037-5497, E-ISSN 1741-3133, Vol. 77, no 5-6, p. 185-195. Article in journal (Refereed)
    Abstract [en]

    This paper focuses on an experimental analysis of the performance and scalability of cluster-based web servers. We carry out the comparative studies using two experimental platforms, namely, a hardware testbed consisting of sixteen PCs and a trace-driven discrete-event simulator. Dispatcher and web server service times used in the simulator are determined by carrying out a set of experiments on the testbed. The simulator is validated against stochastic queuing models and the testbed. Experiments on the testbed are limited by the hardware configuration, but our complementary approach allows us to carry out scalability studies on the validated simulator. The three dispatcher-based scheduling algorithms analyzed are: round robin scheduling, least connected based scheduling, and least loaded based scheduling. The least loaded algorithm is used as the baseline (upper performance bound) in our analysis, and the performance metrics include average waiting time, average response time, and average web server utilization. A synthetic trace generated by the workload generator called SURGE, and a public-domain France Football World Cup 1998 trace, are used. We observe that the round robin algorithm performs much worse in comparison with the other two algorithms for low to medium workload. However, as the request arrival rate increases, the performance of the three algorithms converges, with the least connected algorithm approaching the baseline algorithm at a much faster rate than the round robin. The least connected algorithm performs well for medium to high workload. At very low load, the average waiting time is two to six times higher than the baseline algorithm, but the absolute difference between these two waiting times is very small.
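    The two practical dispatcher policies compared above can be sketched in a few lines. This is a toy model for illustration only; class and method names are not from the paper, and real dispatchers track connections asynchronously.

    ```python
    # Toy web-cluster dispatcher with round-robin and least-connected policies.
    class Dispatcher:
        def __init__(self, n_servers):
            self.active = [0] * n_servers   # open connections per server
            self._next = 0                  # round-robin cursor

        def round_robin(self):
            s = self._next
            self._next = (self._next + 1) % len(self.active)
            return s

        def least_connected(self):
            # Server with the fewest active connections; ties go to the
            # lowest index.
            return min(range(len(self.active)), key=self.active.__getitem__)

        def dispatch(self, policy):
            s = policy()
            self.active[s] += 1
            return s

        def complete(self, s):
            self.active[s] -= 1
    ```

    Round robin ignores current load, which is why it lags the other policies at low to medium workload; least connected adapts to load at the cost of tracking per-server connection counts.
    
    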

  • 50.
    Ulriksson, Jenny
    et al.
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    Ayani, Rassul
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    Consistency overhead using HLA for collaborative work. 2005. In: Ninth IEEE International Symposium on Distributed Simulation and Real-Time Applications, Proceedings, Los Alamitos: IEEE Computer Society, 2005, p. 7-15. Conference paper (Refereed)
    Abstract [en]

    CSCW (Computer Supported Cooperative Work) has been around for many years. However, despite the growing use of CSCW, general infrastructures that support it are few and seldom provide data consistency management. A question at issue is how to easily provide different consistency policies for CSCW applications. In a modeling and simulation project at the Swedish Defense Research Agency we have evaluated technologies for developing a CSCW infrastructure. A frequently used distributed simulation architecture, the HLA, appeared as a candidate for beneficially providing CSCW services. Hence we have investigated the use of HLA for this purpose, with successful results. This paper presents some of the outcome and the experiences from adapting an application to CSCW utilizing HLA. It presents performance experiments for the evaluation of three consistency policies for CSCW using HLA, conclusions, future work and some recommendations.
