51 - 100 of 151
• 51.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering. Ericsson Research, Kista, Sweden. Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA 02139 USA. ABB AB, Corporate Research, 721 78 Västerås, Sweden. Department of Information Engineering, University of Padova, 35131 Padua, Italy. KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
Low-Latency Networking: Where Latency Lurks and How to Tame It (2018). In: Proceedings of the IEEE, ISSN 0018-9219, E-ISSN 1558-2256, p. 1-27. Article in journal (Refereed)

While the current generation of mobile and fixed communication networks has been standardized for mobile broadband services, the next generation is driven by the vision of the Internet of Things and mission-critical communication services requiring latency on the order of milliseconds or submilliseconds. However, these new stringent requirements have a large technical impact on the design of all layers of the communication protocol stack. The cross-layer interactions are complex due to the multiple design principles and technologies that contribute to the layers' design and fundamental performance limitations. We will be able to develop low-latency networks only if we address the problem of these complex interactions from the new point of view of submillisecond latency. In this paper, we propose a holistic analysis and classification of the main design principles and enabling technologies that will make it possible to deploy low-latency wireless communication networks. We argue that these design principles and enabling technologies must be carefully orchestrated to meet the stringent requirements and to manage the inherent tradeoffs between low latency and traditional performance metrics. We also review currently ongoing standardization activities in prominent standards associations, and discuss open problems for future research.

• 52.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering.
DoS-resilient cooperative beacon verification for vehicular communication systems (2019). In: Ad hoc networks, ISSN 1570-8705, E-ISSN 1570-8713, Vol. 90, article id UNSP 101775. Article in journal (Refereed)

Authenticated safety beacons in Vehicular Communication (VC) systems ensure awareness among neighboring vehicles. However, the verification of beacon signatures introduces significant processing overhead for resource-constrained vehicular On-Board Units (OBUs); this is exacerbated in dense neighborhoods or when a clogging Denial of Service (DoS) attack is mounted, in which case the OBU cannot verify all received (authentic or fictitious) beacons. This can significantly delay the verification of authentic beacons or even degrade awareness of neighboring vehicles' status. In this paper, we propose an efficient cooperative beacon verification scheme that leverages efficient symmetric-key-based authentication on top of pseudonymous authentication (based on traditional public key cryptography). It provides efficient discovery of authentic beacons among a pool of received authentic and fictitious beacons, and can significantly decrease the waiting time of beacons in the queue before their validation. We show with simulation results that our scheme can guarantee low waiting times for received beacons even in high neighbor density situations and under DoS attacks, under which a traditional scheme would not be workable.
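The cheap-before-expensive idea in the abstract can be sketched as follows: filter beacons with an inexpensive symmetric-key MAC check so that only plausible beacons reach the costly signature verification. This is an illustrative Python sketch under invented assumptions (a single shared key and a toy payload format), not the paper's actual protocol:

```python
import hashlib
import hmac

def make_beacon(key, payload):
    # Sender attaches a MAC computed with a shared symmetric key
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return (payload, tag)

def filter_beacons(key, beacons):
    """Cheap symmetric-key check: discard beacons whose MAC fails,
    so only plausible beacons reach expensive signature verification."""
    return [b for b in beacons if hmac.compare_digest(
        hmac.new(key, b[0], hashlib.sha256).digest(), b[1])]

key = b"shared-neighborhood-key"          # hypothetical shared key
genuine = [make_beacon(key, b"pos=1,2 speed=13")]
bogus = [(b"pos=9,9 speed=99", b"\x00" * 32)]  # clogging DoS traffic
survivors = filter_beacons(key, genuine + bogus)
```

Only `survivors` would then be queued for full pseudonymous-signature verification, which is where the scheme's savings come from.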

• 53.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
POSTER: Expedited Beacon Verification for VANET (2018). In: WiSec '18: Proceedings of the 11th ACM Conference on Security & Privacy in Wireless and Mobile Networks, Association for Computing Machinery, 2018, p. 283-284. Conference paper (Refereed)

Safety beaconing is a basic, yet essential component in secure Vehicular Communication systems. Safety beacons, broadcasted periodically, provide real-time vehicle status to surrounding vehicles, which can be used to provide spatial and mobility awareness. However, secure and privacy-preserving beacons incur high computation overhead, especially when the vehicle density is high or in the presence of adversarial nodes. Here, we show through experimental evaluation how to significantly decrease beacon verification delay.

• 54.
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering. KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
A Meta Language for Threat Modeling and Attack Simulations (2018). In: ACM International Conference Proceeding Series, 2018. Conference paper (Refereed)

Attack simulations may be used to assess the cyber security of systems. In such simulations, the steps taken by an attacker in order to compromise sensitive system assets are traced, and a time estimate may be computed from the initial step to the compromise of assets of interest. Attack graphs constitute a suitable formalism for the modeling of attack steps and their dependencies, allowing the subsequent simulation. To avoid the costly proposition of building new attack graphs for each system of a given type, domain-specific attack languages may be used. These languages codify the generic attack logic of the considered domain, thus facilitating the modeling, or instantiation, of a specific system in the domain. Examples of possible cyber security domains suitable for domain-specific attack languages are generic types such as cloud systems or embedded systems but may also be highly specialized kinds, e.g. Ubuntu installations; the objects of interest as well as the attack logic will differ significantly between such domains. In this paper, we present the Meta Attack Language (MAL), which may be used to design domain-specific attack languages such as the aforementioned. The MAL provides a formalism that allows the semi-automated generation as well as the efficient computation of very large attack graphs. We declare the formal background to MAL, define its syntax and semantics, exemplify its use with a small domain-specific language and instance model, and report on the computational performance.
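The core simulation described above — tracing attacker steps through an attack graph and computing a time estimate from entry point to asset — can be illustrated with a toy example (not MAL itself; the node names and expected step times below are invented):

```python
import heapq

# Toy attack graph: edges are attack steps annotated with an
# expected time-to-compromise for that step (hypothetical values).
graph = {
    "internet": [("webserver", 2.0)],
    "webserver": [("database", 5.0), ("adminhost", 8.0)],
    "adminhost": [("database", 1.0)],
    "database": [],
}

def time_to_compromise(graph, start, target):
    """Dijkstra over attack steps: minimal expected time for an
    attacker to reach `target` from the entry point `start`."""
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, w in graph[node]:
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(pq, (nd, nxt))
    return float("inf")  # asset unreachable from this entry point
```

A domain-specific language in the MAL sense would generate graphs like this one automatically from a system model, rather than having them written by hand.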

• 55.
KTH, School of Electrical Engineering (EES), Electric Power and Energy Systems.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering. KTH, School of Electrical Engineering (EES), Electric Power and Energy Systems. SICS.
Can the Common Vulnerability Scoring System be Trusted? A Bayesian Analysis (2018). In: IEEE Transactions on Dependable and Secure Computing, ISSN 1545-5971, E-ISSN 1941-0018, Vol. 15, no. 6, p. 1002-1015, article id 7797152. Article in journal (Refereed)

The Common Vulnerability Scoring System (CVSS) is the state-of-the-art system for assessing software vulnerabilities. However, it has been criticized for lack of validity and practitioner relevance. In this paper, the credibility of the CVSS scoring data found in five leading databases – NVD, X-Force, OSVDB, CERT-VN, and Cisco – is assessed. A Bayesian method is used to infer the most probable true values underlying the imperfect assessments of the databases, thus circumventing the problem that ground truth is not known. It is concluded that, with the exception of a few dimensions, the CVSS is quite trustworthy. The databases are relatively consistent, but some are better than others. The expected accuracy of each database for a given dimension can be found by marginalizing confusion matrices. By this measure, NVD is the best and OSVDB is the worst of the assessed databases.
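The expected-accuracy measure mentioned in the abstract — marginalizing a confusion matrix over a prior on the true values — can be illustrated with a toy computation (the matrix and prior below are made up for illustration, not taken from the paper):

```python
# Hypothetical confusion matrix for one CVSS dimension of one database:
# rows = true value, columns = value assessed by the database.
confusion = [
    [0.90, 0.08, 0.02],   # true "Low"
    [0.10, 0.80, 0.10],   # true "Medium"
    [0.05, 0.15, 0.80],   # true "High"
]
prior = [0.5, 0.3, 0.2]   # assumed prevalence of the true values

# Expected accuracy = probability that the assessed value equals the
# true one, i.e. the diagonal of the matrix weighted by the prior.
accuracy = sum(p * confusion[i][i] for i, p in enumerate(prior))
```

Comparing this scalar across databases for a fixed dimension is what ranks NVD above OSVDB in the paper's analysis.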

• 56.
KTH, School of Electrical Engineering (EES), Electric Power and Energy Systems.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering. KTH, School of Electrical Engineering (EES), Electric Power and Energy Systems.
Modeling and analyzing systems-of-systems in the Multi-Attribute Prediction Language (MAPL) (2016). In: Proceedings - 4th International Workshop on Software Engineering for Systems-of-Systems, SESoS 2016, Association for Computing Machinery (ACM), 2016, p. 1-7. Conference paper (Refereed)

The Multi-Attribute Prediction Language (MAPL), an analysis metamodel for non-functional qualities of systems-of-systems, is introduced. MAPL features analysis in five non-functional areas: service cost, service availability, data accuracy, application coupling, and application size. In addition, MAPL explicitly includes utility modeling to make tradeoffs between the qualities. The paper introduces how each of the five non-functional qualities is modeled and quantitatively analyzed based on the ArchiMate standard for enterprise architecture modeling and the previously published Predictive, Probabilistic Architecture Modeling Framework, building on the well-known UML and OCL formalisms. The main contribution of MAPL lies in combining all five non-functional analyses into a single unified framework.

• 57.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
Decentralized Algorithms for Resource Allocation in Mobile Cloud Computing Systems (2018). Licentiate thesis, comprehensive summary (Other academic)

The rapid increase in the number of mobile devices has been followed by an increase in the capabilities of mobile devices, such as the computational power, memory and battery capacity. Yet, the computational resources of individual mobile devices are still insufficient for various delay sensitive and computationally intensive applications. These emerging applications could be supported by mobile cloud computing, which allows using external computational resources. Mobile cloud computing does not only improve the users’ perceived performance of mobile applications, but it also may reduce the energy consumption of mobile devices, and thus it may extend their battery life. However, the overall performance of mobile cloud computing systems is determined by the efficiency of allocating communication and computational resources. The work in this thesis proposes decentralized algorithms for allocating these two resources in mobile cloud computing systems. In the first part of the thesis, we consider the resource allocation problem in a mobile cloud computing system that allows mobile users to use cloud computational resources and the resources of each other. We consider that each mobile device aims at minimizing its perceived response time, and we develop a game theoretical model of the problem. Based on the game theoretical model, we propose an efficient decentralized algorithm that relies on average system parameters, and we show that the proposed algorithm could be a promising solution for coordinating multiple mobile devices. In the second part of the thesis, we consider the resource allocation problem in a mobile cloud computing system that consists of multiple wireless links and a cloud server. We model the problem as a strategic game, in which each mobile device aims at minimizing a combination of its response time and energy consumption for performing the computation. 
We prove the existence of equilibrium allocations of mobile cloud resources, and we use game theoretical tools for designing polynomial time decentralized algorithms with a bounded approximation ratio. We then consider the problem of allocating communication and computational resources over time slots, and we show that equilibrium allocations still exist. Furthermore, we analyze the structure of equilibrium allocations, and we show that the proposed decentralized algorithm for computing equilibria achieves good system performance. By providing constructive equilibrium existence proofs, the results in this thesis provide low complexity decentralized algorithms for allocating mobile cloud resources for various mobile cloud computing architectures.

• 58.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
Decentralized Algorithm for Randomized Task Allocation in Fog Computing Systems (2019). In: IEEE/ACM Transactions on Networking, ISSN 1063-6692, E-ISSN 1558-2566, Vol. 27, no. 1, p. 85-97. Article in journal (Refereed)

Fog computing is identified as a key enabler for using various emerging applications by battery powered and computationally constrained devices. In this paper, we consider devices that aim at improving their performance by choosing to offload their computational tasks to nearby devices or to an edge cloud. We develop a game theoretical model of the problem and use variational inequality theory to compute an equilibrium task allocation in static mixed strategies. Based on the computed equilibrium strategy, we develop a decentralized algorithm for allocating the computational tasks among nearby devices and the edge cloud. We use extensive simulations to provide insight into the performance of the proposed algorithm and compare its performance with that of a myopic best response algorithm that requires global knowledge of the system state. Despite the fact that the proposed algorithm relies on average system parameters only, our results show that it provides good system performance, close to that of the myopic best response algorithm.

• 59.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
A Game Theoretic Analysis of Selfish Mobile Computation Offloading (2017). In: IEEE INFOCOM 2017 - IEEE Conference on Computer Communications, IEEE, 2017. Conference paper (Refereed)

Offloading computation to a mobile cloud is a promising approach for enabling the use of computationally intensive applications by mobile devices. In this paper we consider autonomous devices that maximize their own performance by choosing one of many wireless access points for computation offloading. We develop a game theoretic model of the problem, prove the existence of pure strategy Nash equilibria, and provide a polynomial time algorithm for computing an equilibrium. For the case when the cloud computing resources scale with the number of mobile devices we show that all improvement paths are finite. We provide a bound on the price of anarchy of the game, thus our algorithm serves as an approximation algorithm for the global computation offloading cost minimization problem. We use extensive simulations to provide insight into the performance and the convergence time of the algorithms in various scenarios. Our results show that the equilibrium cost may be close to optimal, and the convergence time is almost linear in the number of mobile devices.
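The kind of equilibrium computation described above can be sketched with best-response dynamics in a simple congestion game, where each device picks the access point minimizing its own cost and costs grow with the number of devices sharing an access point. This is a deliberately simplified cost model for illustration, not the paper's model:

```python
def best_response_dynamics(n_devices, ap_weights, max_rounds=100):
    """Repeatedly let each device switch to its cheapest access point.
    In congestion games of this form all improvement paths are finite,
    so the loop terminates in a pure strategy Nash equilibrium."""
    choice = [0] * n_devices          # everyone starts on AP 0
    load = [0] * len(ap_weights)
    load[0] = n_devices
    for _ in range(max_rounds):
        changed = False
        for d in range(n_devices):
            cur = choice[d]
            def cost(a):
                # load counts device d itself on whichever AP it evaluates
                l = load[a] + (0 if a == cur else 1)
                return ap_weights[a] * l
            best = min(range(len(ap_weights)), key=cost)
            if cost(best) < cost(cur):
                load[cur] -= 1
                load[best] += 1
                choice[d] = best
                changed = True
        if not changed:
            break                      # no device wants to deviate
    return choice

choice = best_response_dynamics(4, [1.0, 1.0])  # two identical APs
```

With two identical access points and four devices, the dynamics settle on a balanced 2/2 split, which is the intuitively fair equilibrium.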

• 60.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
Decentralized Algorithm for Randomized Task Allocation in Fog Computing Systems. Manuscript (preprint) (Other academic)

Fog computing is identified as a key enabler for using various emerging applications by battery powered and computationally constrained devices. In this paper, we consider devices that aim at improving their performance by choosing to offload their computational tasks to nearby devices or to a cloud server. We develop a game theoretical model of the problem, and we use variational inequality theory to compute an equilibrium task allocation in static mixed strategies. Based on the computed equilibrium strategy, we develop a decentralized algorithm for allocating the computational tasks among nearby devices and the cloud server. We use extensive simulations to provide insight into the performance of the proposed algorithm, and we compare its performance with the performance of a myopic best response algorithm that requires global knowledge of the system state. Despite the fact that the proposed algorithm relies on average system parameters only, our results show that it provides good system performance close to that of the myopic best response algorithm.

• 61.
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
Decentralized Scheduling for Offloading of Periodic Tasks in Mobile Edge Computing (2018). In: IFIP Networking 2018, IEEE conference proceedings, 2018, p. 469-477. Conference paper (Refereed)

Motivated by various surveillance applications, we consider wireless devices that periodically generate computationally intensive tasks. The devices aim at maximizing their performance by choosing when to perform the computations and whether or not to offload their computations to a cloud resource via one of multiple wireless access points. We propose a game theoretic model of the problem, give insight into the structure of equilibrium allocations and provide an efficient algorithm for computing pure strategy Nash equilibria. Extensive simulation results show that the performance in equilibrium is significantly better than in a system without coordination of the timing of the tasks’ execution, and the proposed algorithm has an average computational complexity that is linear in the number of devices.

• 62.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
Joint Allocation of Computing and Wireless Resources to Autonomous Devices in Mobile Edge Computing (2018). In: MECOMM 2018 - Proceedings of the 2018 Workshop on Mobile Edge Communications, Part of SIGCOMM 2018, 2018. Conference paper (Refereed)

We consider the interaction between mobile edge computing (MEC) resource management and wireless devices that offload computationally intensive tasks through shared wireless links to edge cloud servers, so as to minimize their completion times. We model the interaction between the devices and the operator that optimizes the allocation of the wireless and computing resources as a Stackelberg game. We show that a pure strategy Stackelberg equilibrium exists, and we provide an efficient algorithm for computing equilibrium allocations. Our simulation results show that joint optimization of the wireless and computing resources can provide a significant reduction of completion times at little increase in computational complexity compared to a system where resource allocation is not optimized.

• 63.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
Poster Abstract: Decentralized Fog Computing Resource Management for Offloading of Periodic Tasks (2018). In: IEEE INFOCOM 2018 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), IEEE, 2018. Conference paper (Refereed)

Fog computing is recognized as a promising approach for meeting the computational and delay requirements of a variety of emerging applications in the Internet of Things. This work presents a game theoretical treatment of the resource allocation problem in a fog computing system where wireless devices periodically generate computationally intensive tasks, and aim at minimizing their own cost.

• 64.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering. Royal Inst Technol, KTH, Sch Elect Engn & Comp Sci, Dept Network & Syst Engn, Stockholm, Sweden.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering.
Wireless and Computing Resource Allocation for Selfish Computation Offloading in Edge Computing (2019). In: IEEE Conference on Computer Communications (IEEE INFOCOM 2019), IEEE, 2019, p. 2467-2475. Conference paper (Refereed)

We consider the problem of allocating wireless and computing resources to a set of autonomous wireless devices in an edge computing system. Devices in the system can decide whether or not to use edge computing resources for offloading computing tasks so as to minimize their completion time, while the edge cloud operator can allocate wireless and computing resources to the devices. We model the interaction between devices and the operator as a Stackelberg game, prove the existence of Stackelberg equilibria, and propose an efficient decentralized algorithm for computing equilibria. We provide a bound on the price of anarchy of the game, which also serves as an approximation ratio bound for the proposed algorithm. Our simulation results show that the joint allocation of wireless and computing resources by the operator can halve the completion times compared to a system with static resource allocation. At the same time, the convergence time of the proposed algorithm is approximately linear in the number of devices, and thus it could be effectively implemented for edge computing resource management.

• 65.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering. Ericsson AB.
KTH, School of Electrical Engineering and Computer Science (EECS), Automatic Control. Ericsson AB. KTH, School of Electrical Engineering and Computer Science (EECS), Information Science and Engineering. Ericsson AB. KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering.
Low-Complexity OFDM Spectral Precoding (2019). In: 20th IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC) 2019, 2019, article id 8815554. Conference paper (Refereed)

This paper proposes a new large-scale mask compliant spectral precoder (LS-MSP) for orthogonal frequency division multiplexing systems. We first consider a previously proposed mask-compliant spectral precoding scheme that utilizes a generic convex optimization solver, which suffers from high computational complexity, notably in large-scale systems. To mitigate the complexity of computing the LS-MSP, we propose a divide-and-conquer approach that breaks the original problem into smaller rank-1 quadratic-constraint problems, each of which yields a closed-form solution. Based on these solutions, we develop three specialized first-order low-complexity algorithms, based on 1) projection onto convex sets and 2) the alternating direction method of multipliers. We also develop an algorithm that capitalizes on the closed-form solutions for the rank-1 quadratic constraints, referred to as 3) semianalytical spectral precoding. Numerical results show that the proposed LS-MSP techniques outperform previously proposed techniques in terms of computational burden while complying with the spectrum mask. The results also indicate that 3) typically needs 3 iterations to achieve similar results as 1) and 2), at the expense of a slightly increased computational complexity.

• 66.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering. KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering.
Probabilistic Modeling and Simulation of Vehicular Cyber Attacks: An Application of the Meta Attack Language (2019). In: ICISSP 2019 - Proceedings of the 5th International Conference on Information Systems Security and Privacy, SciTePress, 2019, p. 175-182. Conference paper (Refereed)

Attack simulations are a feasible means to assess the cyber security of systems. The simulations trace the steps taken by an attacker to compromise sensitive system assets. Moreover, they allow estimating the time taken by the intruder from the initial step to the compromise of assets of interest. One commonly accepted approach for such simulations is attack graphs, which model the attack steps and their dependencies in a formal way. To reduce the effort of creating new attack graphs for each system of a given type, domain-specific attack languages may be employed. They codify common attack logics of the considered domain. Consequently, they ease the reuse of models and, thus, facilitate the modeling of a specific system in the domain. Previously, MAL (the Meta Attack Language) was proposed, which serves as a framework to develop domain-specific attack languages. In this article, we present vehicleLang, a Domain Specific Language (DSL) which can be used to model vehicles with respect to their IT infrastructure and to analyze their weaknesses related to known attacks. To model domain specifics in our language, we rely on existing literature and verify the language using an interview with a domain expert from the automotive industry. To evaluate our results, we perform a Systematic Literature Review (SLR) to identify possible attacks against vehicles. Those attacks serve as a blueprint for test cases checked against the vehicleLang specification.

• 67.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering. KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
Privacy Preservation through Uniformity (2018). In: Proceedings of the ACM Conference on Security and Privacy in Wireless & Mobile Networks (WiSec), Stockholm, Sweden, June 2018, ACM Digital Library, 2018. Conference paper (Refereed)

Inter-vehicle communications disclose rich information about vehicle whereabouts. Pseudonymous authentication secures communication while enhancing user privacy thanks to a set of anonymized certificates, termed pseudonyms. Vehicles switch pseudonyms (and the corresponding private key) frequently; we term this the pseudonym transition process. However, exactly because vehicles can in principle change their pseudonyms asynchronously, an adversary that eavesdrops on (pseudonymously) signed messages could link pseudonyms based on the times of pseudonym transition processes. In this poster, we show how one can link pseudonyms of a given vehicle by simply looking at the timing information of pseudonym transition processes. We also propose "mix-zone everywhere": time-aligned pseudonyms are issued for all vehicles to facilitate synchronous pseudonym updates; as a result, all vehicles update their pseudonyms simultaneously, thus achieving higher user privacy protection.

• 68.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
A Cooperative Location Privacy Protection Scheme for Vehicular Ad-hoc Networks (2019). Report (Other academic)
• 69.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
Efficient, Scalable, and Resilient Vehicle-Centric Certificate Revocation List Distribution in VANETs (2018). In: Proceedings of the ACM Conference on Security and Privacy in Wireless & Mobile Networks (WiSec), Stockholm, Sweden, June 2018, 2018. Conference paper (Refereed)

In spite of progress in securing Vehicular Communication (VC) systems, there is no consensus on how to distribute Certificate Revocation Lists (CRLs). The main challenges lie exactly in (i) crafting an efficient and timely distribution of CRLs for numerous anonymous credentials, pseudonyms, (ii) maintaining strong privacy for vehicles prior to revocation events, even with honest-but-curious system entities, and (iii) catering to the computation and communication constraints of on-board units with intermittent connectivity to the infrastructure. Relying on peers to distribute the CRLs is a double-edged sword: abusive peers could "pollute" the process, thus degrading the timely CRL distribution. In this paper, we propose a vehicle-centric solution that addresses all these challenges and thus closes a gap in the literature. Our scheme radically reduces CRL distribution overhead: each vehicle receives CRLs corresponding only to its region of operation and its actual trip duration. Moreover, a "fingerprint" of CRL pieces is attached to a subset of (verifiable) pseudonyms for fast CRL piece validation (while mitigating resource depletion attacks abusing the CRL distribution). Our experimental evaluation shows that our scheme is efficient, scalable, dependable, and practical: with no more than 25 KB/s of traffic load, the latest CRL can be delivered to 95% of the vehicles in a region (50 × 50 km) within 15 s, i.e., more than 40 times faster than the state-of-the-art. Overall, our scheme is a comprehensive solution that complements standards and can catalyze the deployment of secure and privacy-protecting VC systems.
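The "fingerprint" idea above — validating received CRL pieces by hash comparison instead of a costly per-piece signature verification — can be sketched as follows. This is illustrative only: the piece contents are invented, and authenticating the fingerprint itself (e.g., by binding it to pseudonyms) is out of scope here:

```python
import hashlib

def fingerprint(pieces):
    """One digest per CRL piece; distributed through an authenticated
    channel so that peers can validate relayed pieces cheaply."""
    return [hashlib.sha256(p).digest() for p in pieces]

def valid_piece(fp, index, piece):
    # Fast check: a single hash comparison per received piece,
    # instead of a signature verification, mitigates pollution.
    return hashlib.sha256(piece).digest() == fp[index]

pieces = [b"crl-piece-0", b"crl-piece-1"]  # hypothetical CRL pieces
fp = fingerprint(pieces)
```

A vehicle holding `fp` can immediately discard a forged piece relayed by an abusive peer, which is how the scheme keeps peer-to-peer distribution from being "polluted".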

• 70.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS, Network Systems Laboratory (NS Lab).
Poster: Mix-Zones Everywhere: A Dynamic Cooperative Location Privacy Protection Scheme (2018). In: 2018 IEEE Vehicular Networking Conference (VNC) / [ed] Altintas, O., Tsai, H. M., Lin, K., Boban, M., Wang, C. Y., Sahin, T., IEEE, 2018, article id 8628340. Conference paper (Refereed)

Inter-vehicle communications disclose rich information about vehicle whereabouts. Pseudonymous authentication secures communication while enhancing user privacy. To enhance location privacy, cryptographic mix-zones have been proposed, where vehicles can covertly update their credentials. However, the resilience of such schemes against linking attacks depends heavily on the geometry of the mix-zones, mobility patterns, vehicle density, and arrival rates. In this poster, we propose "mix-zones everywhere", a cooperative location privacy protection scheme to mitigate linking attacks during pseudonym transition. Time-aligned pseudonyms are issued for all vehicles to facilitate synchronous pseudonym updates. Our scheme thwarts Sybil-based misbehavior, strongly maintains user privacy in the presence of honest-but-curious system entities, and is resilient against misbehaving insiders.

• 71.
KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS, Radio Systems Laboratory (RS Lab).
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering.
Efficient Beamforming for Mobile mmWave Networks2019Conference paper (Refereed)

We design a lightweight beam-searching algorithm for mobile millimeter-wave systems. We construct and maintain a set of path skeletons, i.e., potential paths between a user and the serving base station, to substantially expedite the beam-searching process. To exploit the spatial correlations of the channels, we propose an efficient algorithm that measures the similarity of the skeletons and re-executes the beam-searching procedure only when the old one becomes obsolete. We identify and optimize several tradeoffs between: i) the beam-searching overhead and the instantaneous rate of the users, and ii) the number of users and the update overhead of the path skeletons. Simulation results in an outdoor environment with real building map data show that the proposed method can significantly improve the performance of beam-searching in terms of latency, energy consumption, and achievable throughput.

• 72.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre.
Experimental Evaluation of Precision of a Proximity-based Indoor Positioning System2019In: 2019 15th Annual Conference on Wireless On-demand Network Systems and Services, WONS 2019 - Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2019, p. 130-137, article id 8795488Conference paper (Refereed)

Bluetooth Low Energy (BLE) beacons are small transmitters with long battery life that are considered for providing proximity-based services. In this work, we experimentally evaluate the performance of a proximity-based indoor positioning system built with off-the-shelf beacons in a realistic environment. We demonstrate that the performance of the system depends on a number of factors, such as the distance between the beacon and the mobile device, the positioning of the beacon, and the presence and positioning of obstacles such as human bodies. We further propose an online algorithm based on moving-average forecasting and evaluate it in the presence of human mobility. We conclude that algorithms for proximity-based indoor positioning must be evaluated in realistic scenarios, for instance considering people and traffic on the used radio bands. The uncertainty in positioning is high in our experiments, and hence the success of commercial context-aware solutions based on BLE beacons is highly dependent on the accuracy required by each application.
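The moving-average forecasting step described in the abstract can be illustrated with a minimal sketch: smooth noisy RSSI readings over a sliding window, then convert the smoothed value to a distance estimate with a log-distance path-loss model. The function names, window size, and calibration constants below are illustrative assumptions, not the paper's implementation.

```python
from collections import deque

def smooth_rssi(samples, window=5):
    """Simple moving average over the last `window` RSSI samples (dBm)."""
    buf = deque(maxlen=window)
    out = []
    for s in samples:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out

def rssi_to_distance(rssi, tx_power=-59.0, path_loss_exp=2.0):
    """Log-distance path-loss model: distance (m) from smoothed RSSI.
    `tx_power` is the assumed calibrated RSSI at 1 m."""
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exp))
```

Smoothing trades responsiveness for stability, which is exactly the tension the abstract raises when people move between the beacon and the device.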

• 73.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering.
Ericsson. Ericsson.
Product Feature Prioritization using the Hidden Structure Method: A Practical Case at Ericsson2016Conference paper (Refereed)

In this paper, we present a case where we apply the Hidden Structure method to product feature prioritization at Ericsson. The method extends the more common Design Structure Matrix (DSM) approach, which has long been used in technology management (e.g., project management and systems engineering) to model complex systems and processes. The Hidden Structure method focuses on analyzing a DSM based on coupling and modularity theory, and it has been used in a number of software architecture and software portfolio cases. In previous work by the authors, the method was tested on organization transformation at Ericsson; however, this is the first time it has been employed in the domain of product feature prioritization. Today, at Ericsson, features are prioritized based on a business case approach where each feature is handled in isolation from other features and the main focus is customer or market-based requirements. By employing the Hidden Structure method, we show that features are heavily dependent on each other in a complex network, and thus should not be treated as isolated islands. These dependencies need to be considered when prioritizing features in order to save time and money, as well as increase end-customer satisfaction.

• 74.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering.
Visualizing and Measuring Enterprise Application Architecture: An Exploratory Telecom Case2014In: 2014 47th Hawaii International Conference on System Sciences, HICSS, IEEE Computer Society, 2014, p. 3847-3856Conference paper (Refereed)

We test a method for visualizing and measuring enterprise application architectures. The method was designed and previously used to reveal the hidden internal architectural structure of software applications. The focus of this paper is to test if it can also uncover new facts about the applications and their relationships in an enterprise architecture, i.e., if the method can reveal the hidden external structure between software applications. Our test uses data from a large international telecom company. In total, we analyzed 103 applications and 243 dependencies. Results show that the enterprise application structure can be classified as a core-periphery architecture with a propagation cost of 25%, core size of 34%, and architecture flow through of 64%. These findings suggest that the method could be effective in uncovering the hidden structure of an enterprise application architecture.
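The propagation cost reported above can be computed from a dependency matrix by taking its transitive closure (the "visibility matrix") and measuring the closure's density, following the usual DSM definition. A minimal sketch; the paper's actual tooling is not shown here.

```python
def propagation_cost(dsm):
    """Propagation cost of a dependency structure matrix (DSM).

    dsm[i][j] == 1 means component i depends directly on component j.
    The visibility matrix marks every direct or indirect dependency,
    including each component's path-length-0 dependency on itself;
    propagation cost is the fraction of nonzero visibility entries.
    """
    n = len(dsm)
    # Start from direct dependencies plus the identity (path length 0).
    v = [[1 if i == j else dsm[i][j] for j in range(n)] for i in range(n)]
    # Floyd-Warshall-style transitive closure.
    for k in range(n):
        for i in range(n):
            if v[i][k]:
                for j in range(n):
                    if v[k][j]:
                        v[i][j] = 1
    return sum(map(sum, v)) / (n * n)
```

On the 103 applications and 243 dependencies analyzed in the paper, this kind of density measure is what yields a figure such as the reported 25% propagation cost.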

• 75.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering.
Visualizing and Measuring Software Portfolio Architecture: A Flexibility Analysis2014In: 16th International Dependency and Structure Modelling Conference, DSM 2014, 2014, p. 65-74Conference paper (Refereed)

In this paper, we test a Design Structure Matrix (DSM) based method for visualizing and measuring software portfolio architectures, and use our measures to predict the costs of architectural change. Our data is drawn from a biopharmaceutical company, comprising 407 architectural components with 1,157 dependencies between them. We show that the architecture of this system can be classified as a "core-periphery" system, meaning it contains a single large dominant cluster of interconnected components (the "Core") representing 32% of the system. We find that the classification of software applications within this architecture, as being either Core or Peripheral, is a significant predictor of the costs of architectural change. In regression tests, we show that this measure has greater predictive power than prior measures of coupling used in the literature.

• 76.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering.
KTH, School of Electrical Engineering (EES), Industrial Information and Control Systems.
Extending a General Theory of Software to Engineering2014In: Proceedings of the 3rd SEMAT Workshop on General Theories of Software Engineering, 2014, p. 36-39Conference paper (Refereed)

In this paper, we briefly describe a general theory of software used in order to model and predict the current and future quality of software systems and their environment. The general theory is described using a class model containing classes such as application component, business service, and infrastructure function as well as attributes such as modifiability, cost, and availability. We also elaborate how this general theory of software can be extended into a general theory of software engineering by adding engineering activities, roles, and requirements.

• 77.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering.
KTH, School of Electrical Engineering (EES), Network and Systems engineering. KTH, School of Electrical Engineering (EES), Network and Systems engineering.
Automatic Design of Secure Enterprise Architecture2017In: Proceedings of the 2017 IEEE 21st International Enterprise Distributed Object Computing Conference Workshops and Demonstrations (EDOCW 2017) / [ed] Halle, S Dijkman, R Lapalme, J, Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 65-70Conference paper (Refereed)

Architecture models mainly have three functions: 1) to document, 2) to analyze, and 3) to improve the system under consideration. All three functions have suffered from being time-consuming and expensive, mainly because they are manual processes in need of hard-to-find expertise. Recent work has, however, automated both the data collection and the analysis. In order for enterprise architecture modeling to finally become free of manual labor, the design function also needs to be automated. In this position paper, we propose the Automatic Designer, a solution that employs machine learning techniques to realize the design of (near) optimal architecture solutions. This particular implementation is focused on security analysis, but could easily be extended to other topics.

• 78.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering.
KTH, School of Electrical Engineering (EES), Industrial Information and Control Systems. KTH, School of Electrical Engineering (EES), Industrial Information and Control Systems.
Search-Based Design of Large Software Systems-of-Systems2015In: Proceedings - 3rd International Workshop on Software Engineering for Systems-of-Systems, SESoS 2015, IEEE , 2015, p. 44-47Conference paper (Refereed)

This work-in-progress paper presents the foundation for an Automatic Designer of large software systems-of-systems. The core formalism for the Automatic Designer is UML. The Automatic Designer extends UML with a fitness function, which uses analysis of non-functional requirements, utility theory, and stakeholder requirements as the basis for its design suggestions. This extension logic is formalized using an OCL-based Predictive, Probabilistic Architecture Modeling Framework (P2AMF). A set of manipulation operators is used on the UML model in order to modify it. Then, from a component library (with off-the-shelf products), new components are introduced to the design. Using these operators, a search algorithm looks for an optimal solution.

• 79.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering. KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
Automated Probabilistic System Architecture Analysis in the Multi-Attribute Prediction Language (MAPL): Iteratively Developed using Multiple Case Studies2017In: International Journal of Complex Systems Informatics and Modeling Quarterly (CSIMQ), Vol. June/July, no 11, p. 38-68Article in journal (Refereed)

The Multi-Attribute Prediction Language (MAPL), an analysis metamodel for non-functional qualities of system architectures, is introduced. MAPL features automated analysis in five non-functional areas: service cost, service availability, data accuracy, application coupling, and application size. In addition, MAPL explicitly includes utility modeling to make trade-offs between the qualities. The article introduces how each of the five non-functional qualities is modeled and quantitatively analyzed based on the ArchiMate standard for enterprise architecture modeling and the previously published Predictive, Probabilistic Architecture Modeling Framework, building on the well-known UML and OCL formalisms. The main contribution of MAPL lies in the probabilistic use of multi-attribute utility theory for the trade-off analysis of the non-functional properties. Additionally, MAPL proposes novel model-based analyses of several non-functional attributes. We also report how MAPL has been iteratively developed using multiple case studies.

• 80.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering.
KTH, School of Engineering Sciences (SCI), Mechanics.
Critical Success Factors in E-Learning for Project-Based Courses2014In: EDULEARN14: 6th International Conference on Education and New Learning Technologies / [ed] Chova, LG; Martinez, AL; Torres, IC, 2014, p. 6125-6134Conference paper (Refereed)
• 81.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering. ABB Corporate Research, Forskargränd 7, 722 26 Västerås, Sweden .
ABB Corporate Research, Forskargränd 7, 722 26 Västerås, Sweden . ABB Corporate Research, Forskargränd 7, 722 26 Västerås, Sweden .
Increasing software development efficiency and maintainability for complex industrial systems - A case study2013In: Journal of Software Maintenance and Evolution: Research and Practice, ISSN 1532-060X, E-ISSN 1532-0618, Vol. 25, no 3, p. 285-301Article in journal (Refereed)

It is difficult to manage complex software systems. Thus, many research initiatives focus on how to improve software development efficiency and maintainability. However, the trend in industry is still alarming: software development projects fail, and maintenance is becoming more and more expensive. One problem could be that research has been focusing on the wrong things. Most research publications address either process improvements or architectural improvements; few known approaches consider how architectural changes affect processes and vice versa. One proposed method, called the BusinessArchitectureProcess method, takes these aspects into consideration. In 2007, the method was tested in one case study. Findings from the 2007 case study show that the method is useful, but in need of improvements and further validation. The present paper employs the method in a second case study. The contribution of this paper is thus a second test and validation of the proposed method, along with useful method improvements for future use of the method.

• 82.
Univ Oslo, Dept Informat, N-0316 Oslo, Norway..
ABB Corp Res, S-72226 Vasteras, Sweden.. KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering. Linkoping Univ, Dept Sci & Technol, S-58183 Linkoping, Sweden.. Univ Oslo, Dept Informat, N-0316 Oslo, Norway.. Univ Oslo, Dept Informat, N-0316 Oslo, Norway..
Latency Analysis of Wireless Networks for Proximity Services in Smart Home and Building Automation: The Case of Thread2019In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 4856-4867Article in journal (Refereed)

Proximity service (ProSe), which uses geographic location and device information by considering the proximity of mobile devices, enriches the services we use to interact with people and things around us. ProSe has been used in mobile social networks in proximity as well as in smart home and building automation (e.g., Google Home). To enable ProSe in smart homes, reliable and stable network protocols and communication infrastructures are needed. Thread is a new wireless protocol aimed at smart home and building automation (BA), which supports mesh networks and native Internet Protocol connectivity. The latency of Thread should be carefully studied when it is used for user-friendly and safety-critical ProSe in smart homes and BA. In this paper, a system-level model of latency in the Thread mesh network is presented. The accumulated latency consists of different kinds of delay from the application layer down to the physical layer. A Markov chain model is used to derive the probability distribution of the medium access control service time. The system-level model is experimentally validated in a multi-hop Thread mesh network, and the outcomes show that the model results match the experimental results well. Finally, based on the analytical model, a software tool is developed to estimate the latency of the Thread mesh network, providing developers with more network information for developing user-friendly and safety-critical ProSe in smart homes and BA.

• 83.
Huawei Technol, Ireland Res Ctr, Dublin D01 R3K6 1, Ireland.;Univ Cyprus, KIOS Res & Innovat Ctr Excellence, CY-1678 Nicosia, Cyprus..
Univ Minho, Algoritmi Res Ctr, P-4800058 Guimaraes, Portugal.. Hanyang Univ, Dept Elect Engn, Seoul 04763, South Korea.. Korea Aerosp Res Inst, Nav Res & Dev Div, Daejeon 34133, South Korea.. HERE, Enterprise Indoor Positioning Solut, Tampere 33100, Finland.. KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering. ..
A Survey of Enabling Technologies for Network Localization, Tracking, and Navigation2018In: IEEE Communications Surveys and Tutorials, ISSN 1553-877X, E-ISSN 1553-877X, Vol. 20, no 4, p. 3607-3644Article in journal (Refereed)

Location information for events, assets, and individuals, mostly focusing on two dimensions so far, has triggered a multitude of applications across different verticals, such as consumer, networking, industrial, health care, public safety, and emergency response use cases. To fully exploit the potential of location awareness and enable new advanced location-based services, localization algorithms need to be combined with complementary technologies including accurate height estimation, i.e., three dimensional location, reliable user mobility classification, and efficient indoor mapping solutions. This survey provides a comprehensive review of such enabling technologies. In particular, we present cellular localization systems including recent results on 5G localization, and solutions based on wireless local area networks, highlighting those that are capable of computing 3D location in multi-floor indoor environments. We overview range-free localization schemes, which have been traditionally explored in wireless sensor networks and are nowadays gaining attention for several envisioned Internet of Things applications. We also present user mobility estimation techniques, particularly those applicable in cellular networks, that can improve localization and tracking accuracy. Regarding the mapping of physical space inside buildings for aiding tracking and navigation applications, we study recent advances and focus on smartphone-based indoor simultaneous localization and mapping approaches. The survey concludes with service availability and system scalability considerations, as well as security and privacy concerns in location architectures, discusses the technology roadmap, and identifies future research directions.

• 84. Lee, Giwon
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering. KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering.
Fog-Assisted Aggregated Synchronization Scheme for Mobile Cloud Storage Applications2019In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 56852-56863Article in journal (Refereed)

Cloud storage applications, such as Dropbox and Google Drive, have recently become very popular among mobile users. In these applications, a cloud server is responsible for synchronizing updates to files among mobile users; thus, if files are shared by many mobile users and are frequently updated, the resulting synchronization traffic can be significant. In order to reduce the synchronization traffic while providing acceptable access latency, we propose a fog-assisted aggregated synchronization (FAS) scheme in which the fog computing server and the cloud server conduct localized and aggregated synchronizations, respectively. We develop an analytical model of the FAS scheme based on renewal-reward theory and use it for model-based adjustment of the timer that controls the trade-off between access latency and synchronization traffic. We use analytical and simulation results to give insight into the effects of the timer, the update-to-access ratio, the number of mobile users, and the sensitivity to the arrival process. The results demonstrate that the FAS scheme can reduce the synchronization traffic significantly, with acceptable access latency, compared to conventional schemes.

• 85. Lei, L.
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering. University of Luxembourg, Luxembourg City, 1855, Luxembourg.
Learning-Assisted Optimization for Energy-Efficient Scheduling in Deadline-Aware NOMA Systems2019In: IEEE Transactions on Green Communications and Networking, ISSN 2473-2400, Vol. 3, no 3, p. 615-627, article id 8657758Article in journal (Refereed)

In this paper, we study a class of minimum-energy scheduling problems in non-orthogonal multiple access (NOMA) systems. NOMA is adopted to enable efficient channel utilization and interference mitigation, such that base stations can consume minimal energy to empty their queued data in the presence of transmission deadlines, and each user can obtain all the requested data in time. Due to the high computational complexity of resource scheduling and the stringent execution-time constraints in practical systems, providing a time-efficient and high-quality solution for 5G real-time systems is challenging. Conventional iterative optimization approaches may exhibit limitations in supporting online optimization. We herein explore a viable alternative and develop a learning-assisted optimization framework that improves computational efficiency while retaining competitive energy-saving performance. The idea is to use deep-learning-based predictions to accelerate the optimization process of conventional optimization methods for tackling the NOMA resource scheduling problems. In numerical studies, the proposed framework demonstrates high computational efficiency, and its computational time is insensitive to the input size. The framework provides optimal solutions as long as the learning-based predictions satisfy a derived optimality condition. For the general case of imperfect predictions, the algorithmic solution is error-tolerant and performance-scalable, keeping the energy-saving performance close to the global optimum.

• 86.
KTH, School of Electrical Engineering and Computer Science (EECS), Information Science and Engineering. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre.
KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre. KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering. KTH, School of Electrical Engineering and Computer Science (EECS), Information Science and Engineering. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre.
Will Scale-Free Popularity Develop Scale-Free Geo-Social Networks?2019In: IEEE Transactions on Network Science and Engineering, Vol. 6, no 3, p. 587-598Article in journal (Refereed)

Empirical results show that spatial factors such as distance, population density, and communication range affect our social activities, which is also reflected in the development of ties in social networks. This motivates the need for social network models that take these spatial factors into account. Therefore, in this paper we propose a gravity-law-based geo-social network model, where connections develop according to the popularity of the individuals but are constrained by their geographic distance and the surrounding population density. Specifically, we consider a power-law-distributed popularity and random node positions governed by a Poisson point process. We evaluate the characteristics of the emerging networks, considering the degree distribution, the average degree of neighbors, and the local clustering coefficient. These local metrics reflect the robustness of the network, the information dissemination speed, and the communication locality. We show that unless the communication range is strictly limited, the emerging networks are scale-free, with a rank exponent affected by the spatial factors. Even the average neighbor degree and the local clustering coefficient show tendencies known from non-geographic scale-free networks, at least when considering individuals with low popularity. At high popularity values, however, the spatial constraints lead to popularity-independent average neighbor degrees and clustering coefficients.

• 87.
ABB Corp Res, S-72226 Vasteras, Sweden.. KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering. Univ Oslo, Dept Informat, Fog Comp, N-0315 Oslo, Norway.. Linkoping Univ, S-58183 Linkoping, Sweden.;Linkoping Univ, Commun Elect, S-58183 Linkoping, Sweden..
A Taxonomy for the Security Assessment of IP-Based Building Automation Systems: The Case of Thread2018In: IEEE Transactions on Industrial Informatics, ISSN 1551-3203, E-ISSN 1941-0050, Vol. 14, no 9, p. 4113-4123Article in journal (Refereed)

Motivated by the proliferation of wireless building automation systems (BAS) and increasing security awareness among BAS operators, in this paper we propose a taxonomy for the security assessment of BASs. We apply the proposed taxonomy to Thread, an emerging native IP-based protocol for BAS. Our analysis reveals a number of potential weaknesses in the design of Thread. We propose potential solutions for mitigating several identified weaknesses and discuss their efficacy. We also provide suggestions for improvements in future versions of the standard. Overall, our analysis shows that Thread has well-designed security controls for the targeted use case, making it a promising candidate for communication in next-generation BASs.

• 88. Luvisotto, M.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering.
Physical Layer Design of High-Performance Wireless Transmission for Critical Control Applications2017In: IEEE Transactions on Industrial Informatics, ISSN 1551-3203, E-ISSN 1941-0050, Vol. 13, no 6, p. 2844-2854, article id 7924385Article in journal (Refereed)

The next generations of industrial control systems will require high-performance wireless networks (named WirelessHP) able to provide extremely low latency, ultrahigh reliability, and high data rates. The current strategy toward the realization of industrial wireless networks relies on adopting the bottom layers of general-purpose wireless standards and customizing only the upper layers. In this paper, a new bottom-up approach is proposed through the realization of a WirelessHP physical layer specifically targeted at reducing the communication latency through the minimization of packet transmission time. Theoretical analysis shows that the proposed design allows a substantial reduction in packet transmission time, down to 1 μs, with respect to the general-purpose IEEE 802.11 physical layer. The design is validated by an experimental demonstrator, which shows that reliable communications up to a 20 m range can be established with the proposed physical layer.

• 89. Magnusson, S.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering.
Optimal voltage control using event triggered communication2019In: e-Energy 2019 - Proceedings of the 10th ACM International Conference on Future Energy Systems, Association for Computing Machinery (ACM), 2019, p. 343-354Conference paper (Refereed)

The integration of volatile renewable energy into distribution networks on a large scale will demand advanced voltage control algorithms. Communication will be an integral part of these algorithms; however, it is unclear what kind of communication protocols will be most effective for the task. Motivated by such questions, this paper investigates how voltage control can be achieved using event-triggered communication. In particular, we consider online algorithms that require the network's buses to communicate only when their voltage is outside a feasible operation range. We prove the convergence of these algorithms to an optimal operating point at the rate O(1/τ), assuming linearized power flows. We illustrate the performance of the algorithms on the full nonlinear AC power flow in simulations. Our results show that event-triggered protocols can significantly reduce the communication needed for smart grid control.
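The event-triggering idea can be sketched under an assumed linearized voltage model v = R q + v_ext: each bus updates its reactive power q, and would communicate, only while its voltage is outside the feasible band. The sensitivity matrix, band limits, and step size below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def event_triggered_voltage_control(R, v_ext, v_min=0.95, v_max=1.05,
                                    step=0.1, iters=500):
    """Toy event-triggered control: `events` counts how many
    bus-communications the event condition would cause."""
    q = np.zeros(len(v_ext))
    events = 0
    for _ in range(iters):
        v = R @ q + v_ext                      # linearized power flow
        triggered = (v < v_min) | (v > v_max)  # event condition per bus
        if not triggered.any():                # in-band: no communication
            break
        events += int(triggered.sum())
        excess = v - np.clip(v, v_min, v_max)
        q[triggered] -= step * excess[triggered]  # push v back into band
    return R @ q + v_ext, events

R = np.array([[1.0, 0.2], [0.2, 1.0]])  # assumed voltage sensitivity matrix
v, events = event_triggered_voltage_control(R, np.array([1.10, 0.90]))
```

With these numbers, both voltages start outside the band and are driven back toward it, so communication occurs exactly while the constraints are violated.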

• 90.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering. KTH - Royal Institute of Technology.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
Convergence of Limited Communication Gradient Methods2018In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, Vol. 63, no 5, p. 1356-1371Article in journal (Refereed)

Distributed optimization increasingly plays a central role in the economical and sustainable operation of cyber-physical systems. Nevertheless, the complete potential of the technology has not yet been fully exploited in practice due to communication limitations posed by real-world infrastructures. This work investigates fundamental properties of distributed optimization based on gradient methods, where gradient information is communicated using a limited number of bits. In particular, a general class of quantized gradient methods is studied, where the gradient direction is approximated by a finite quantization set. Sufficient and necessary conditions are provided on such a quantization set to guarantee that the methods minimize any convex objective function with Lipschitz continuous gradient and a nonempty and bounded set of optimizers. A lower bound on the cardinality of the quantization set is provided, along with specific examples of minimal quantizations. Convergence rate results are established that connect the fineness of the quantization and the number of iterations needed to reach a predefined solution accuracy. Generalizations of the results to a relevant class of constrained problems using projections are considered. Finally, the results are illustrated by simulations of practical systems.
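As a concrete (and much simplified) member of this class, consider quantizing the gradient to its coordinate-wise signs, so each coordinate costs one bit per iteration, combined with a diminishing step size. This is only an illustrative sketch; the paper's quantization sets and convergence conditions are more general.

```python
import numpy as np

def sign_quantized_descent(grad, x0, steps=200):
    """Gradient descent communicating only sign(gradient), i.e. one bit
    per coordinate, with diminishing step size 1/t."""
    x = np.asarray(x0, dtype=float)
    for t in range(1, steps + 1):
        q = np.sign(grad(x))   # finite quantization set: {-1, 0, 1}^d
        x = x - (1.0 / t) * q
    return x

# Minimize the convex quadratic f(x) = 0.5 * ||x - c||^2.
c = np.array([1.0, -2.0])
x_final = sign_quantized_descent(lambda x: x - c, x0=[2.0, 0.0])
```

Because the step sizes sum divergently but shrink to zero, the iterate oscillates ever more tightly around the minimizer, illustrating how quantization fineness and iteration count jointly determine the reachable accuracy.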

• 91.
Harvard Univ, Sch Engn & Appl Sci, Cambridge, MA 02138 USA..
Harvard Univ, Sch Engn & Appl Sci, Cambridge, MA 02138 USA.. KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering. Harvard Univ, Sch Engn & Appl Sci, Cambridge, MA 02138 USA..
Voltage Control Using Limited Communication2019In: IEEE Transactions on Big Data, ISSN 2325-5870, E-ISSN 2168-6750, Vol. 6, no 3, p. 993-1003Article in journal (Refereed)

In electricity distribution networks, the increasing penetration of renewable energy generation necessitates faster and more sophisticated voltage controls. Unfortunately, recent research shows that local voltage control fails in achieving the desired regulation, unless there is communication between the controllers. However, the communication infrastructure for distribution systems is less reliable and less ubiquitous compared to that for the bulk transmission system. In this paper, we design distributed voltage control that uses limited communication. That is, only neighboring buses need to communicate a few bits between each other for each control step. We investigate how these controllers can achieve the desired asymptotic behavior of the voltage regulation and we provide upper bounds on the number of bits that are needed to ensure a predefined accuracy of the regulation. Finally, we illustrate the results by numerical simulations.

• 92.
KTH, School of Electrical Engineering and Computer Science (EECS), Automatic Control.
KTH, School of Electrical Engineering and Computer Science (EECS), Automatic Control. KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre. KTH, School of Electrical Engineering and Computer Science (EECS), Electric Power and Energy Systems.
Bayesian model selection for change point detection and clustering2018In: 35th International Conference on Machine Learning, ICML 2018, International Machine Learning Society (IMLS) , 2018, p. 5497-5520Conference paper (Refereed)

We address a generalization of change point detection with the purpose of detecting the change locations and the levels of clusters of a piecewise constant signal. Our approach is to model it as a nonparametric penalized least-squares model selection on a family of models indexed over the collection of partitions of the design points, and to propose a computationally efficient algorithm to approximately solve it. Statistically, minimizing such a penalized criterion yields an approximation to the maximum a posteriori probability (MAP) estimator. The criterion is then analyzed and an oracle inequality is derived using a Gaussian concentration inequality. The oracle inequality is used to derive, on the one hand, conditions for consistency and, on the other hand, an adaptive upper bound on the expected square risk of the estimator, which statistically motivates our approximation. Finally, we apply our algorithm to simulated data to experimentally validate the statistical guarantees and illustrate its behavior.
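The penalized least-squares idea can be sketched in miniature (this is the classic optimal-partitioning dynamic program with a per-segment penalty `lam`, not the paper's algorithm): each candidate segment is fit by its mean, and segmentations are scored by squared error plus a penalty per change point.

```python
import numpy as np

def segment_cost(prefix, prefix_sq, i, j):
    # SSE of fitting y[i:j] by its mean, via prefix sums.
    n = j - i
    s = prefix[j] - prefix[i]
    sq = prefix_sq[j] - prefix_sq[i]
    return sq - s * s / n

def detect_changes(y, lam):
    n = len(y)
    prefix = np.concatenate(([0.0], np.cumsum(y)))
    prefix_sq = np.concatenate(([0.0], np.cumsum(np.square(y))))
    best = np.full(n + 1, np.inf)   # best[j]: optimal penalized cost of y[:j]
    best[0] = -lam                  # so the first segment carries no penalty
    argmin = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + lam + segment_cost(prefix, prefix_sq, i, j)
            if c < best[j]:
                best[j], argmin[j] = c, i
    # Backtrack the change locations from the last index.
    cps, j = [], n
    while j > 0:
        j = argmin[j]
        if j > 0:
            cps.append(j)
    return sorted(cps)

# Noiseless piecewise-constant signal with a single step at index 50.
y = np.concatenate([np.zeros(50), 3.0 * np.ones(50)])
cps = detect_changes(y, lam=2.0)
```

Each extra change point must reduce the squared error by more than `lam` to be accepted, which is what keeps spurious splits out.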

• 93.
Ericsson Res, Stockholm, Sweden.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering. Swedish Inst Comp Sci RISE SICS, Stockholm, Sweden. Ericsson Res, Stockholm, Sweden.
Performance Prediction in Dynamic Clouds using Transfer Learning2019In: 2019 IFIP/IEEE Symposium on Integrated Network and Service Management, IM 2019, IEEE, 2019, p. 242-250, article id 8717847Conference paper (Refereed)

Learning a performance model for a cloud service is challenging since its operational environment changes during execution, which requires re-training of the model in order to maintain prediction accuracy. Training a new model from scratch generally involves extensive new measurements and often generates a data-collection overhead that negatively affects the service performance. In this paper, we investigate an approach for re-training neural-network models, which is based on transfer learning. Under this approach, a limited number of neural-network layers are re-trained while others remain unchanged. We study the accuracy of the re-trained model and the efficiency of the method with respect to the number of re-trained layers and the number of new measurements. The evaluation is performed using traces collected from a testbed that runs a Video-on-Demand service and a Key-Value Store under various load conditions. We study model re-training after changes in load pattern, infrastructure configuration, service configuration, and target metric. We find that our method significantly reduces the number of new measurements required to compute a new model after a change. The reduction exceeds an order of magnitude in most cases.
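The layer-freezing idea behind this re-training approach can be shown with a toy sketch (not the paper's model or data): a two-layer network whose feature layer is kept frozen after an environment change, while only the output layer is re-fit on a handful of new measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" feature layer, frozen after the change.
W1 = rng.standard_normal((16, 4))

def features(X):
    # ReLU features produced by the frozen layer, shape (n_samples, 16).
    return np.maximum(W1 @ X.T, 0.0).T

# New operating conditions: a changed target function and only a few
# fresh measurements (here labels are synthesized to be linear in the
# frozen features, so the last layer alone can fit them).
w_true = rng.standard_normal(16)
X_new = rng.standard_normal((40, 4))
y_new = features(X_new) @ w_true

# Re-train only the last layer: a single least-squares solve.
W2, *_ = np.linalg.lstsq(features(X_new), y_new, rcond=None)

X_test = rng.standard_normal((10, 4))
err = np.max(np.abs(features(X_test) @ W2 - features(X_test) @ w_true))
```

Because the frozen features already capture the structure, 40 measurements suffice to refit 16 output weights, which mirrors the paper's observation that re-training a few layers needs far fewer new measurements than training from scratch.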

• 94.
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering. KTH Royal Inst Technol, Dept Network & Syst Engn, Stockholm, Sweden.
The effect of network topology on the control traffic in distributed SDN2018In: 2018 IFIP NETWORKING CONFERENCE (IFIP NETWORKING) AND WORKSHOPS / [ed] Stiller, B, IEEE , 2018, p. 199-207Conference paper (Refereed)

Software Defined Networking (SDN) has the promise of flexible routing, traffic management and service provisioning in communication networks. To allow SDN-based networks to scale in size, the control architecture needs to be distributed, which in turn requires the introduction of controller-to-controller communication. This is needed to ensure that the distributed controllers have the same understanding of the underlying network and can make consistent local decisions. In this paper, we evaluate the volume of the emerging control traffic, considering a distributed controller architecture based on ONOS and OpenFlow. We show that the control traffic increases drastically with the number of controllers, as well as with the size of the underlying network. We evaluate topologies forming regular and random graphs, and conclude that the type of topology influences the traffic volume significantly, while the network density has a less significant effect. We show that the control traffic is significant even if the number of controllers is selected such that the control traffic is minimized, and we argue that further optimization of ONOS is needed to trade off control traffic load and consistency in the network views.

• 95.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering. KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
VPKIaaS: A highly-available and dynamically-scalable vehicular public-key infrastructure2018In: WiSec 2018 - Proceedings of the 11th ACM Conference on Security and Privacy in Wireless and Mobile Networks, Association for Computing Machinery, Inc , 2018, p. 302-304Conference paper (Refereed)

The central building block of secure and privacy-preserving Vehicular Communication (VC) systems is a Vehicular Public-Key Infrastructure (VPKI), which provides vehicles with multiple anonymized credentials, termed pseudonyms. These pseudonyms are used to ensure message authenticity and integrity while preserving vehicle (and thus passenger) privacy. In the light of emerging large-scale multi-domain VC environments, the efficiency of the VPKI and, more broadly, its scalability are paramount. In this extended abstract, we leverage the state-of-the-art VPKI system and enhance its functionality towards a highly-available and dynamically-scalable design; this ensures that the system remains operational in the presence of benign failures or any resource depletion attack, and that it dynamically scales out, or possibly scales in, according to the requests' arrival rate. Our full-blown implementation on the Google Cloud Platform shows that deploying a VPKI for a large-scale scenario can be cost-effective, while efficiently issuing pseudonyms for the requesters.

• 96.
KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS, Network Systems Laboratory (NS Lab).
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems Engineering. KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS.
VPKIaaS: Towards Scaling Pseudonymous Authentication for Large Mobile Systems2019Report (Other academic)
• 97. Nurcan, S.
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
Message from the EDOC 2018 program chairs2018In: 22nd IEEE International Enterprise Distributed Object Computing Conference, EDOC 2018, article id 8536137Article in journal (Refereed)
• 98.
KTH, School of Electrical Engineering and Computer Science (EECS), Information Science and Engineering.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering. KTH. KTH, School of Electrical Engineering and Computer Science (EECS), Information Science and Engineering. KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
Learning-based Pilot Precoding and Combining for Wideband Millimeter-wave Networks2017In: 2017 IEEE 7TH INTERNATIONAL WORKSHOP ON COMPUTATIONAL ADVANCES IN MULTI-SENSOR ADAPTIVE PROCESSING (CAMSAP), IEEE , 2017Conference paper (Refereed)

This paper proposes an efficient channel estimation scheme with a minimum number of pilots for a frequency-selective millimeter-wave communication system. We model the dynamics of the channel's second-order statistics by a Markov process and develop a learning framework that finds the optimal precoding and combining vectors for pilot signals, given the channel dynamics. Using these vectors, the transmitter and receiver will sequentially estimate the corresponding angles of departure and arrival, and then refine the pilot precoding and combining vectors to minimize the error of estimating the small-scale fading of all subcarriers. Numerical results demonstrate near-optimality of our approach, compared to the oracle wherein the second-order statistics (not the dynamics) are perfectly known a priori.

• 99.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
Performance Analysis of Opportunistic Content Distribution via Data-Driven Mobility Modeling2018Doctoral thesis, comprehensive summary (Other academic)

An opportunistic network is formed by co-located mobile users in order to exchange data via direct wireless links when their devices are within transmission range, without relying on the use of fixed network infrastructure. In this thesis we investigate the capabilities of opportunistic networks and cover two main areas: data-driven modeling of user mobility and analytic performance evaluation of location-aware opportunistic content distribution.

The first part of the thesis focuses on mobility modeling. We collect a dataset of user associations in the wireless network of the KTH Royal Institute of Technology, and characterize the mobility of users in this dataset both from the network and from the user perspective. From the network perspective, we model the aggregate mobility and access patterns to different parts of the network. To characterize individual mobility, we assess how mobile the users are, and how accurately their movements can be predicted in the near future. Based on these findings, and on the analysis of several other mobility traces, we propose a mobility model for populations with churn, specifically tailored for the evaluation of opportunistic content distribution. In the second part of the thesis, we evaluate the performance of opportunistic content distribution in ephemeral, location-aware networks where content is stored only on the user devices within the locale of interest. We develop a framework that allows modeling of the spread of information as a stochastic process and accurate capturing of the stochastic fluctuations in the number of distributed content items. We study the feasibility of opportunistic content distribution and, by means of stochastic stability analysis, assess how the system parameters can be engineered to ensure content persistence. We show that content persistence strongly depends on the density of users, and that the demands on user resources are relatively low already at moderate densities.

• 100.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering. Tech Univ Munich, Dept Informat, Munich, Germany.
KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering. KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
Ensuring Persistent Content in Opportunistic Networks via Stochastic Stability Analysis2018In: ACM Transactions on Modeling and Performance Evaluation of Computing Systems (TOMPECS), ISSN 2376-3639, Vol. 3, no 4, p. 16:1-16:23, article id 16Article in journal (Refereed)

The emerging device-to-device communication solutions and the abundance of mobile applications and services make opportunistic networking not only a feasible solution but also an important component of future wireless networks. Specifically, the distribution of locally relevant content could be based on the community of mobile users visiting an area, if long-term content survival can be ensured this way. In this article, we establish the conditions of content survival in such opportunistic networks, considering the user mobility patterns, as well as the time users keep forwarding the content, as the controllable system parameter.

We model the content spreading with an epidemic process, and derive an approximation based on stochastic differential equations. By means of stability analysis, we determine the necessary user contribution to ensure content survival. We show that the required contribution from the users depends significantly on the size of the population, that users need to redistribute content only in a short period within their stay, and that they can decrease their contribution significantly in crowded areas. Hence, with the appropriate control of the system parameters, opportunistic content sharing can be both reliable and sustainable.
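The modeling approach described above can be sketched with an illustrative SIS-type diffusion (the parameters and noise form are assumptions, not the paper's exact model): the fraction x of users holding the content evolves as dx = (beta*x*(1-x) - delta*x) dt + sigma*sqrt(x) dW, simulated by Euler-Maruyama. The content survives when the effective spreading rate beta exceeds the rate delta at which users stop forwarding.

```python
import numpy as np

def simulate(beta, delta, sigma=0.02, x0=0.2, dt=0.01, T=200.0, seed=1):
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(int(T / dt)):
        # Epidemic drift: new infections minus users that stop forwarding.
        drift = beta * x * (1.0 - x) - delta * x
        # Euler-Maruyama step with demographic (sqrt-scaled) noise.
        x += drift * dt + sigma * np.sqrt(max(x, 0.0) * dt) * rng.standard_normal()
        x = min(max(x, 0.0), 1.0)  # the fraction stays in [0, 1]
    return x

surviving = simulate(beta=1.0, delta=0.4)  # beta > delta: content persists
dying = simulate(beta=0.3, delta=0.6)      # beta < delta: content dies out
```

In the surviving regime the process fluctuates around the endemic equilibrium x* = 1 - delta/beta, while below the threshold the content fraction is driven to the absorbing state at zero, which is the survival condition the stability analysis makes precise.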
