Publications (10 of 23)
Sousa, D. P., Du, R., Barros da Silva Jr., J. M., Cavalcante, C. C. & Fischione, C. (2023). Leakage detection in water distribution networks using machine-learning strategies. Water Science and Technology: Water Supply, 23(3), 1115-1126
2023 (English). In: Water Science and Technology: Water Supply, ISSN 1606-9749, E-ISSN 1607-0798, Vol. 23, no. 3, p. 1115-1126. Article in journal (Refereed). Published
Abstract [en]

This work proposes a reliable leakage detection methodology for water distribution networks (WDNs) based on efficient machine-learning strategies. We analyze pressure measurements from pumps in district metered areas (DMAs) in Stockholm, Sweden, considering a residential DMA of the water distribution network. Our proposed methodology combines unsupervised learning (K-means and cluster validation techniques) with supervised learning (learning vector quantization algorithms). The learning strategies we propose have low complexity, and the numerical experiments show the potential of using machine-learning strategies in leakage detection for monitored WDNs. Specifically, our experiments show that the proposed learning strategies obtain correct classification rates of up to 93.98%.
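The K-means-plus-LVQ pipeline the abstract describes can be sketched in a few lines. The snippet below is an illustrative reconstruction on synthetic two-dimensional "pressure" features, not the paper's code: the data, the number of prototypes per class, the learning rate, and the epoch count are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    # Standard Lloyd's algorithm: returns k cluster centroids of X.
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def lvq1(X, y, protos, proto_labels, lr=0.05, epochs=20):
    # LVQ1 update: pull the winning prototype toward same-class samples,
    # push it away from other-class samples.
    P = protos.copy()
    for _ in range(epochs):
        for x, t in zip(X, y):
            w = np.argmin(((P - x) ** 2).sum(-1))
            sign = 1.0 if proto_labels[w] == t else -1.0
            P[w] += sign * lr * (x - P[w])
    return P

def classify(X, P, proto_labels):
    # Nearest-prototype classification.
    return proto_labels[np.argmin(((X[:, None] - P) ** 2).sum(-1), axis=1)]

# Synthetic "pressure" features: normal operation vs. leak (lower mean pressure).
X_normal = rng.normal([5.0, 5.0], 0.4, size=(200, 2))
X_leak = rng.normal([3.5, 3.5], 0.4, size=(200, 2))
X = np.vstack([X_normal, X_leak])
y = np.array([0] * 200 + [1] * 200)

# Unsupervised stage: K-means initializes a few prototypes per class;
# supervised stage: LVQ1 refines them with the labels.
protos = np.vstack([kmeans(X_normal, 3), kmeans(X_leak, 3)])
proto_labels = np.array([0] * 3 + [1] * 3)
protos = lvq1(X, y, protos, proto_labels)
acc = (classify(X, protos, proto_labels) == y).mean()
```

The low complexity the abstract claims follows from the prototype representation: classification only compares each sample against six prototypes, not against the whole training set.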

Place, publisher, year, edition, pages
IWA Publishing, 2023
Keywords
clustering, leakage detection, machine-learning, supervised learning, unsupervised learning, water distribution network
National Category
Computer Sciences; Water Engineering
Identifiers
urn:nbn:se:kth:diva-330902 (URN), 10.2166/ws.2023.054 (DOI), 000936903600001 (), 2-s2.0-85153259498 (Scopus ID)
Note

QC 20230705

Available from: 2023-07-05. Created: 2023-07-05. Last updated: 2023-07-05. Bibliographically approved
Du, R., Timoudas, T. O. & Fischione, C. (2022). Comparing Backscatter Communication and Energy Harvesting in Massive IoT Networks. IEEE Transactions on Wireless Communications, 21(1), 429-443
2022 (English). In: IEEE Transactions on Wireless Communications, ISSN 1536-1276, E-ISSN 1558-2248, Vol. 21, no. 1, p. 429-443. Article in journal (Refereed). Published
Abstract [en]

Backscatter communication (BC) and radio-frequency energy harvesting (RF-EH) are two promising technologies for extending the battery lifetime of wireless devices. Although there have been some qualitative comparisons between these two technologies, quantitative comparisons are still lacking, especially for massive IoT networks. In this paper, we address this gap in the research literature and perform a quantitative comparison between BC and RF-EH in massive IoT networks with multiple primary users and multiple low-power devices acting as secondary users. An essential feature of our model is that it includes the interference caused by the secondary users to the primary users, and we show that this interference significantly impacts the system performance of massive IoT networks. For the RF-EH model, the power requirements of digital-to-analog conversion and signal amplification are taken into account. We pose and solve a power minimization problem for BC, and we show analytically when BC is better than RF-EH. The results of the numerical simulations illustrate the significant benefits of using BC in terms of saving power and supporting massive IoT, compared to using RF-EH. The results also show that the backscatter coefficients of the BC devices must be individually tunable in order to guarantee good performance of BC.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Keywords
Backscatter, Radio transmitters, Wireless communication, Base stations, Performance evaluation, Radio frequency, Interference, Backscatter communication, energy harvesting, Internet of Things, power optimization
National Category
Communication Systems
Identifiers
urn:nbn:se:kth:diva-307265 (URN), 10.1109/TWC.2021.3096800 (DOI), 000740005900033 (), 2-s2.0-85111012818 (Scopus ID)
Note

QC 20220120

Available from: 2022-01-20. Created: 2022-01-20. Last updated: 2022-06-25. Bibliographically approved
Sousa, D. P., Du, R., B. da Silva Jr., J. M., Cavalcante, C. C. & Fischione, C. (2022). Leakage Detection In Water Distribution Networks: Efficient Training By Data Clustering. In: IWA World Water Congress & Exhibition, Sep. 2022. Paper presented at IWA World Water Congress & Exhibition, 11-15 September 2022, Bella Center, Copenhagen, Denmark. IWA Publishing
2022 (English). In: IWA World Water Congress & Exhibition, Sep. 2022. IWA Publishing, 2022. Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

This work proposes a reliable leakage detection methodology for water distribution networks based on machine-learning techniques. The design is developed through real data acquisition from a municipal area of a water distribution network. We propose to combine unsupervised learning (K-means and cluster validation techniques) and supervised learning (LVQ-type algorithms) for the efficient design of prototype-based classifiers. We investigate several metrics for determining the optimal number of clusters, and report attractive classification accuracies (approximately 90%) in scenarios with a severely limited number of prototypes.

Place, publisher, year, edition, pages
IWA Publishing, 2022
Keywords
learning vector quantization, water monitoring, clustering, unsupervised learning
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-313005 (URN)
Conference
IWA World Water Congress & Exhibition, 11-15 September 2022, Bella Center, Copenhagen, Denmark
Projects
Mistra-InfraMaint ATITAN
Funder
Mistra - The Swedish Foundation for Strategic Environmental Research
Note

QC 20221011

Available from: 2022-05-27. Created: 2022-05-27. Last updated: 2024-03-15. Bibliographically approved
Ohlson Timoudas, T., Du, R. & Fischione, C. (2020). Enabling Massive IoT in Ambient Backscatter Communication Systems. In: ICC 2020 - 2020 IEEE International Conference on Communications (ICC). Paper presented at 2020 IEEE International Conference on Communications, ICC 2020, Convention Centre Dublin, Dublin, Ireland, 7-11 June 2020. IEEE, Article ID 9149022.
2020 (English). In: ICC 2020 - 2020 IEEE International Conference on Communications (ICC), IEEE, 2020, article id 9149022. Conference paper, Published paper (Refereed)
Abstract [en]

Backscatter communication is a promising solution for enabling information transmission between ultra-low-power devices, but its potential is not fully understood. One major problem is dealing with the interference between the backscatter devices, which is usually not taken into account, or simply treated as noise in the cases where there are a limited number of backscatter devices in the network. In order to better understand this problem in the context of massive IoT (Internet of Things), we consider a network with a base station having one antenna, serving one primary user and multiple IoT devices, called secondary users. We formulate an optimization problem with the goal of minimizing the transmit power needed at the base station, while the ratio of backscattered signal, called the backscatter coefficient, is optimized for each of the IoT devices. Such an optimization problem is non-convex, and thus finding an optimal solution in real time is challenging. In this paper, we prove necessary and sufficient conditions for the existence of an optimal solution, and show that it is unique. Furthermore, we develop an efficient solution algorithm, which only requires solving a linear system of equations with as many unknowns as there are secondary users. The simulation results show that the energy outage probability is reduced by up to 40-80 percentage points in dense networks with up to 150 secondary users. To our knowledge, this is the first work that studies backscatter communication in the context of massive IoT while also taking into account the interference between devices.
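The computational claim in the abstract — the optimum is found by solving a linear system with one unknown per secondary user — makes the algorithm cheap even at the densest simulated size (150 users). The sketch below only illustrates that cost: the matrix `A` is a hypothetical, well-conditioned stand-in for the coupling between users' backscatter coefficients, not the system actually derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 150  # number of secondary users, matching the densest simulated network

# Hypothetical stand-in for the paper's system matrix: unit diagonal with
# small positive off-diagonal coupling, kept strictly diagonally dominant
# so the system is guaranteed nonsingular.
A = rng.uniform(0.0, 1.0 / (2 * n), size=(n, n))
np.fill_diagonal(A, 1.0)
c = rng.uniform(0.5, 1.0, size=n)  # right-hand side (illustrative targets)

b = np.linalg.solve(A, c)          # candidate backscatter coefficients
residual = np.max(np.abs(A @ b - c))
```

Solving such a system is O(n^3) at worst, negligible next to the non-convex problem it replaces.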

Place, publisher, year, edition, pages
IEEE, 2020
Series
IEEE International Conference on Communications, ISSN 1550-3607
National Category
Communication Systems
Identifiers
urn:nbn:se:kth:diva-292370 (URN), 10.1109/ICC40277.2020.9149022 (DOI), 000606970302102 (), 2-s2.0-85089427312 (Scopus ID)
Conference
2020 IEEE International Conference on Communications, ICC 2020, Convention Centre Dublin, Dublin, Ireland, 7-11 June 2020
Note

QC 20210407

Available from: 2021-04-07. Created: 2021-04-07. Last updated: 2022-06-25. Bibliographically approved
Du, R., Magnusson, S. & Fischione, C. (2020). The Internet of Things as a Deep Neural Network. IEEE Communications Magazine, 58(9), 20-25
2020 (English). In: IEEE Communications Magazine, ISSN 0163-6804, E-ISSN 1558-1896, Vol. 58, no. 9, p. 20-25. Article in journal (Refereed). Published
Abstract [en]

An important task in the Internet of Things (IoT) is field monitoring, where multiple IoT nodes take measurements and communicate them to the base station or the cloud for processing, inference, and analysis. When the measurements are high-dimensional (e.g., videos or time-series data), IoT networks with limited bandwidth and low-power devices may not be able to support such frequent transmissions with high data rates. To ensure communication efficiency, this article proposes to model the measurement compression at the IoT nodes and the inference at the base station or cloud as a deep neural network (DNN). We propose a new framework where the data transmitted from the nodes are the intermediate outputs of a layer of the DNN. We show how to learn the model parameters of the DNN and study the trade-off between the communication rate and the inference accuracy. The experimental results show that we can save approximately 96 percent of transmissions with only a 2.5 percent degradation in inference accuracy, which shows the potential to enable many new IoT data analysis applications that generate large amounts of measurements.
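The proposed split of a DNN between the IoT node and the base station can be illustrated as follows. All dimensions and weights here are hypothetical (in the framework they would be learned jointly); the point is that the node transmits a low-dimensional intermediate layer output instead of the raw measurement.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical dimensions: each node compresses a 128-dim measurement into
# an 8-dim intermediate vector. Random weights stand in for learned ones.
d_in, d_hidden, d_out = 128, 8, 4
W_node = rng.normal(0, 0.1, (d_hidden, d_in))    # first layer, runs on the IoT node
W_cloud = rng.normal(0, 0.1, (d_out, d_hidden))  # remaining layers, run at the base station

def node_forward(x):
    # On-device layer with ReLU: this 8-dim output is the transmitted payload,
    # replacing the raw 128-dim measurement.
    return np.maximum(0.0, W_node @ x)

def cloud_forward(z):
    # Base-station layers produce the inference result from the payload.
    logits = W_cloud @ z
    return np.argmax(logits)

x = rng.normal(size=d_in)   # raw high-dimensional measurement
z = node_forward(x)         # transmitted intermediate output
pred = cloud_forward(z)

# Fraction of transmitted values saved per sample under this split.
compression = 1.0 - d_hidden / d_in
```

With these illustrative sizes the per-sample payload shrinks by 93.75%; the paper's 96% figure additionally depends on the learned architecture and task.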

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2020
National Category
Communication Systems
Identifiers
urn:nbn:se:kth:diva-284376 (URN), 10.1109/MCOM.001.2000015 (DOI), 000576253200005 (), 2-s2.0-85092455562 (Scopus ID)
Note

QC 20201022

Available from: 2020-10-22. Created: 2020-10-22. Last updated: 2024-03-15. Bibliographically approved
Du, R., Shokri-Ghadikolaei, H. & Fischione, C. (2020). Wirelessly-Powered Sensor Networks: Power Allocation for Channel Estimation and Energy Beamforming. IEEE Transactions on Wireless Communications, 19(5), 2987-3002
2020 (English). In: IEEE Transactions on Wireless Communications, ISSN 1536-1276, E-ISSN 1558-2248, Vol. 19, no. 5, p. 2987-3002. Article in journal (Refereed). Published
Abstract [en]

Wirelessly-powered sensor networks (WPSNs) are becoming increasingly important in different monitoring applications. We consider a WPSN where a multiple-antenna base station, dedicated to energy transmission, sends pilot signals to estimate the channel state information and consequently shapes the energy beams toward the sensor nodes. Given a fixed energy budget at the base station, in this paper we investigate the novel problem of optimally allocating the power between channel estimation and energy transmission. We formulate this non-convex optimization problem for general channel estimation and beamforming schemes that satisfy certain qualification conditions. We provide a new solution approach and a performance analysis in terms of optimality and complexity. We also present a closed-form solution for the case where the channels are estimated by least-squares channel estimation and a maximum ratio transmit beamforming scheme is used. The analysis and simulations indicate a significant gain in terms of the network sensing rate, compared to a fixed power allocation, and highlight the importance of improving the channel estimation efficiency.
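The closed-form special case mentioned in the abstract combines two standard ingredients: a least-squares (LS) channel estimate from a pilot, and a maximum ratio transmit (MRT) beamformer aligned with that estimate. A minimal single-node sketch, with hypothetical pilot power and noise level (the paper's optimal pilot/energy power split is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 8                 # base-station antennas (illustrative)
pilot_power = 2.0     # hypothetical pilot power
noise_std = 0.1       # hypothetical receiver noise level

# True complex channel, unit average power per antenna.
h = (rng.normal(size=m) + 1j * rng.normal(size=m)) / np.sqrt(2)

# Least-squares estimate from one pilot: y = sqrt(p)*h + n  =>  h_hat = y/sqrt(p).
noise = noise_std * (rng.normal(size=m) + 1j * rng.normal(size=m)) / np.sqrt(2)
y = np.sqrt(pilot_power) * h + noise
h_hat = y / np.sqrt(pilot_power)

# Maximum ratio transmit beamformer: steer the energy beam along the estimate.
w = h_hat / np.linalg.norm(w_vec := h_hat)

# Beamforming gain toward the node; with perfect CSI this equals ||h||^2.
gain = np.abs(np.conj(h) @ w) ** 2
```

The trade-off the paper optimizes is visible here: more pilot power tightens `h_hat` (raising `gain` toward `||h||**2`) but leaves less of the fixed energy budget for the energy beam itself.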

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2020
Keywords
Wirelessly-powered sensor network, wireless energy transfer, power allocation, channel acquisition, non-linear energy harvesting
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-276914 (URN), 10.1109/TWC.2020.2969659 (DOI), 000536297700007 (), 2-s2.0-85084919505 (Scopus ID)
Note

QC 20200622

Available from: 2020-06-22. Created: 2020-06-22. Last updated: 2022-06-26. Bibliographically approved
Zeng, M., Du, R., Fodor, V. & Fischione, C. (2019). Computation Rate Maximization for Wireless Powered Mobile Edge Computing with NOMA. In: Proceedings 20th IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks (IEEE WoWMoM 2019). Paper presented at 20th IEEE International Symposium on "A World of Wireless, Mobile and Multimedia Networks" (WoWMoM), Washington, DC, June 10-12, 2019. IEEE
2019 (English). In: Proceedings 20th IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks (IEEE WoWMoM 2019), IEEE, 2019. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we consider a mobile edge computing (MEC) network that is wirelessly powered. Each user harvests wireless energy and follows a binary computation offloading policy, i.e., it either executes the task locally or offloads it to the MEC as a whole. For the offloading users, non-orthogonal multiple access (NOMA) is adopted for information transmission. We consider rate-adaptive computational tasks and aim at maximizing the sum computation rate of all users by jointly optimizing the individual computing mode selection (local computing or offloading), the time allocations for energy transfer and for information transmission, together with the local computing speed or the transmission power level. The major difficulty of the rate maximization problem lies in the combinatorial nature of the multiuser computing mode selection and its coupling with the time allocation. We also study the case where the offloading users adopt time division multiple access (TDMA) as a benchmark, and derive the optimal time sharing among the users. We show that the maximum achievable rate is the same for the TDMA and the NOMA system, and in the case of NOMA it is independent of the decoding order, which can be exploited to improve system fairness. To maximize the sum computation rate, we propose a greedy mode selection based on the wireless channel gains, combined with the optimal allocation of the energy transfer time. Numerical results show that the proposed solution maximizes the computation rate in homogeneous networks, and that binary offloading leads to significant gains. Moreover, NOMA increases the fairness of the rate distribution among the users significantly, when compared with TDMA.
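The greedy mode selection based on channel gains can be sketched as follows. The computation-rate model below is a simplified, hypothetical stand-in (the abstract does not fully specify the paper's model), and a grid search over the energy-transfer time replaces the optimal allocation derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
gains = rng.exponential(1.0, size=10)   # users' wireless channel gains (illustrative)

def sum_rate(offload_mask, gains, t_energy):
    # Simplified stand-in for the paper's computation-rate model:
    # offloading users transmit during the remaining time 1 - t_energy with
    # a rate growing in harvested energy and channel gain; local users
    # compute at a rate proportional to harvested energy.
    t_tx = 1.0 - t_energy
    off = gains[offload_mask]
    loc = gains[~offload_mask]
    r_off = t_tx * np.sum(np.log2(1.0 + t_energy * off / max(t_tx, 1e-9)))
    r_loc = 0.5 * t_energy * np.sum(loc)
    return r_off + r_loc

# Greedy selection: candidate offloading sets are formed by the k strongest
# channels; for each set, pick the best energy-transfer time on a grid.
order = np.argsort(gains)[::-1]
best = (-np.inf, None, None)
for k in range(len(gains) + 1):
    mask = np.zeros(len(gains), dtype=bool)
    mask[order[:k]] = True
    for t in np.linspace(0.05, 0.95, 19):
        r = sum_rate(mask, gains, t)
        if r > best[0]:
            best = (r, k, t)

best_rate, best_k, best_t = best
```

The greedy structure reduces the 2^10 possible mode-selection sets to only 11 candidates, which is what makes the combinatorial part of the problem tractable.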

Place, publisher, year, edition, pages
IEEE, 2019
National Category
Communication Systems
Identifiers
urn:nbn:se:kth:diva-264864 (URN), 10.1109/WoWMoM.2019.8792997 (DOI), 000494803500029 (), 2-s2.0-85071470115 (Scopus ID)
Conference
20th IEEE International Symposium on "A World of Wireless, Mobile and Multimedia Networks" (WoWMoM), Washington, DC, June 10-12, 2019
Note

QC 20191217

Part of ISBN 978-1-7281-0270-2; 978-1-7281-0271-9

Available from: 2019-12-17. Created: 2019-12-17. Last updated: 2024-10-15. Bibliographically approved
Du, R., Santi, P., Xiao, M., Vasilakos, A. & Fischione, C. (2019). The sensable city: A survey on the deployment and management for smart city monitoring. IEEE Communications Surveys and Tutorials, 21(2), 1533-1560
2019 (English). In: IEEE Communications Surveys and Tutorials, E-ISSN 1553-877X, Vol. 21, no. 2, p. 1533-1560. Article in journal (Refereed). Published
Abstract [en]

In the last two decades, various monitoring systems have been designed and deployed in urban environments, toward the realization of so-called smart cities. Such systems are based both on dedicated sensor nodes and on ubiquitous but non-dedicated devices such as smart phones and vehicles' sensors. When designing sensor network monitoring systems for smart cities, we face two essential problems: node deployment and sensing management. These design problems are challenging due to the large urban areas to monitor, the constrained deployment locations, and the heterogeneous types of sensing devices. A vast body of literature from different disciplines has addressed these challenges; however, we do not yet have a comprehensive understanding or sound design guidelines. This paper addresses this research gap and provides an overview of the theoretical problems we face and the possible approaches for solving them. Specifically, this paper focuses on both the deployment of the devices (the system design/configuration part) and the sensing management of the devices (the system operation part). We also discuss how to choose among existing algorithms for different types of monitoring applications in smart cities, such as structural health monitoring, water pipeline networks, and traffic monitoring. We finally discuss future research opportunities and open challenges for smart city monitoring.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2019
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-240367 (URN), 10.1109/COMST.2018.2881008 (DOI), 000470838000020 (), 2-s2.0-85056584676 (Scopus ID)
Note

QC 20190107

Available from: 2018-12-17. Created: 2018-12-17. Last updated: 2024-02-27. Bibliographically approved
Du, R., Gkatzikis, L., Fischione, C. & Xiao, M. (2018). On Maximizing Sensor Network Lifetime by Energy Balancing. IEEE Transactions on Control of Network Systems, 5(3)
2018 (English). In: IEEE Transactions on Control of Network Systems, E-ISSN 2325-5870, Vol. 5, no. 3. Article in journal (Refereed). Published
Abstract [en]

Many physical systems, such as water/electricity distribution networks, are monitored by battery-powered wireless sensor networks (WSNs). Since battery replacement of sensor nodes is generally difficult, long-term monitoring can only be achieved if the operation of the WSN nodes contributes to a long WSN lifetime. Two prominent techniques for prolonging WSN lifetime are 1) optimal sensor activation and 2) efficient data gathering and forwarding based on compressive sensing. These techniques are feasible only if the activated sensor nodes establish a connected communication network (connectivity constraint) and satisfy a compressive sensing decoding constraint (cardinality constraint). These two constraints make the problem of maximizing network lifetime via sensor node activation and compressive sensing NP-hard. To overcome this difficulty, an alternative approach that iteratively solves energy balancing problems is proposed. However, whether maximizing network lifetime and energy balancing are aligned objectives is a fundamental open issue. The analysis reveals that the two optimization problems give different solutions, but the difference between the lifetime achieved by the energy balancing approach and the maximum lifetime is small when the initial energy at the sensor nodes is significantly larger than the energy consumed for a single transmission. The lifetime achieved by energy balancing is asymptotically optimal, and the achievable network lifetime is at least 50% of the optimum. Analysis and numerical simulations quantify the efficiency of the proposed energy balancing approach.
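The energy-balancing principle, stripped of the connectivity and cardinality constraints that make the real problem NP-hard, can be illustrated with a toy model: if exactly one node must be active per sensing round, always draining the node with the most remaining energy pushes the lifetime toward the sum-energy bound whenever the initial energy is much larger than the per-round cost. Everything below (node count, energies, cost) is hypothetical.

```python
def lifetime_balancing(energies, c):
    # Energy-balancing rule: each round, activate the node with the most
    # remaining energy; the network lives while some node can still pay c.
    e = list(energies)
    rounds = 0
    while max(e) >= c:
        i = e.index(max(e))
        e[i] -= c
        rounds += 1
    return rounds

def lifetime_fixed(energies, c):
    # Naive baseline: always activate node 0 until it dies.
    return int(energies[0] // c)

E = [100.0, 60.0, 40.0]   # initial energies, in multiples of the round cost
c = 1.0                   # energy consumed per active round
lt_bal = lifetime_balancing(E, c)   # approaches sum(E)/c when E_i >> c
lt_fix = lifetime_fixed(E, c)
```

In this toy run the balanced schedule reaches the full sum-energy lifetime of 200 rounds versus 100 for the naive baseline, consistent with the asymptotic-optimality claim (though the toy omits the constraints that cause the gap in the paper).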

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-185313 (URN), 10.1109/TCNS.2017.2696363 (DOI), 000445357100035 (), 2-s2.0-85053762086 (Scopus ID)
Note

QC 20160420

Available from: 2016-04-15. Created: 2016-04-15. Last updated: 2022-06-23. Bibliographically approved
Du, R., Xiao, M. & Fischione, C. (2018). Optimal Node Deployment and Energy Provision for Wirelessly Powered Sensor Networks. IEEE Journal on Selected Areas in Communications, 37(2), 407-423
2018 (English). In: IEEE Journal on Selected Areas in Communications, ISSN 0733-8716, E-ISSN 1558-0008, Vol. 37, no. 2, p. 407-423. Article in journal (Refereed). Published
Abstract [en]

In a typical wirelessly powered sensor network (WPSN), wireless chargers provide energy to sensor nodes by using wireless energy transfer (WET). The chargers can greatly improve the lifetime of a WPSN through energy beamforming with a proper charging schedule of the energy beams. However, the supplied energy may still not meet the energy demand of the sensor nodes. This issue can be alleviated by deploying redundant sensor nodes, which not only increase the total harvested energy, but also decrease the energy consumption per node, provided that an efficient sleep/awake scheduling of the nodes is performed. Such a problem of joint optimal sensor deployment, WET scheduling, and node activation is posed and investigated in this paper. The problem is an integer optimization that is challenging due to the binary decision variables and non-linear constraints. Based on the analysis of a necessary condition for the WPSN to be immortal, we decouple the original problem into a node deployment problem and a charging and activation scheduling problem. Then, we propose an algorithm and prove that it achieves the optimal solution under a mild condition. The simulation results show that the proposed algorithm reduces the number of nodes that need to be deployed by approximately 16%, compared to a random-based approach. The simulations also show that, if the battery buffers are large enough, the optimality condition is easy to meet.

Place, publisher, year, edition, pages
IEEE Press, 2018
National Category
Communication Systems
Research subject
Electrical Engineering
Identifiers
urn:nbn:se:kth:diva-235224 (URN), 10.1109/JSAC.2018.2872380 (DOI), 000457642100012 (), 2-s2.0-85054262790 (Scopus ID)
Note

QC 20180919

Available from: 2018-09-18. Created: 2018-09-18. Last updated: 2022-06-26. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-1934-9208