kth.se Publications

Publications (10 of 12)
Imtiaz, S. (2025). Machine Learning Based Resource Allocation for Future Wireless Networks. (Doctoral dissertation). Stockholm, Sweden: KTH Royal Institute of Technology
Machine Learning Based Resource Allocation for Future Wireless Networks
2025 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Efficient resource allocation is a critical challenge in future wireless networks, particularly as user demands, network density, and network complexity continue to grow. Traditionally, the channel state information (CSI) of the user terminals is used for resource allocation. However, as networks densify and users become mobile, CSI-based resource allocation incurs significant overhead. This work explores a novel approach to resource allocation that leverages machine learning models trained on user coordinate information. Specifically, we formulate the resource allocation problem in three ways: (1) modulation and coding scheme (MCS) prediction for transport capacity maximization, (2) resource allocation in noise-limited systems based on user positions, and (3) resource allocation in interference-limited systems to ensure fairness while maximizing capacity. We consider two user placement scenarios for performance evaluation: the random drop scenario (RDS), where users are randomly distributed in the propagation environment, and the mobility model scenario (MMS), where user positions follow a linear trajectory.

We perform extensive evaluations on the RDS datasets across key metrics, including the number of training samples, computational complexity, and model performance under varying channel conditions and erroneous position information. Our results demonstrate the viability of coordinates-based resource allocation through machine learning in complex wireless environments, achieving efficient and scalable resource allocation while maintaining robust performance under dynamic and imperfect conditions. The proposed coordinates-based resource allocation scheme performs on par with the CSI-based scheme, achieving at least 90% of its performance in an interference-limited system with varying scatterer density. In addition, the scheme significantly outperforms the geometric resource allocation scheme, which applies users' coordinate information directly for distance-dependent resource allocation. The MMS dataset is used to determine the implementation cost of the proposed scheme, considering a realistic channel model in which data samples are collected continually in the system. With this approach, we compare performance in terms of training time, prediction time, and memory footprint of the machine learning models. The results show that the coordinates-based resource allocation scheme can be used reliably for efficient resource allocation while incurring a low to moderate implementation cost for noise-limited and interference-limited systems, respectively. This study highlights the potential of machine learning-driven resource management for future wireless networks, paving the way for intelligent, adaptive, and efficient communication systems.
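
The core technique running through the thesis can be illustrated with a minimal sketch: a supervised model learns a mapping from user coordinates to a resource-allocation decision (here, an MCS index). Everything below is a hedged toy example; the data generation, the transmitter at the origin, and the 10-level MCS labelling are illustrative assumptions, not the thesis setup.

    # Toy sketch: predict an MCS index from user (x, y) coordinates.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 100, size=(5000, 2))        # user positions in metres (assumed)

    # Hypothetical labels: SNR falls with distance from a transmitter at the
    # origin, plus shadowing noise, and maps to one of 10 MCS levels.
    dist = np.linalg.norm(coords, axis=1)
    snr_db = 40 - 30 * np.log10(dist + 1) + rng.normal(0, 4, len(coords))
    mcs = np.clip((snr_db // 3).astype(int), 0, 9)

    X_tr, X_te, y_tr, y_te = train_test_split(coords, mcs, random_state=0)
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
    print("MCS prediction accuracy from coordinates:", model.score(X_te, y_te))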

Place, publisher, year, edition, pages
Stockholm, Sweden: KTH Royal Institute of Technology, 2025. p. ix, 180
Series
TRITA-EECS-AVL ; 2025:18
Keywords
Resource Allocation, Machine Learning, Wireless Communication Systems
National Category
Communication Systems
Research subject
Electrical Engineering
Identifiers
urn:nbn:se:kth:diva-359405 (URN) · 978-91-8106-181-9 (ISBN)
Public defence
2025-02-27, https://kth-se.zoom.us/w/63341563274?tk=IJQu8Euz_YD8sl3kDLIxrxkOcD5zqA7i24342Dl_cEo.DQcAAAAOv3ONihZTRXQ3Z0s3dFRpQ2M1bnowemtOaVpBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA, F3, Lindstedtsvägen 26, Stockholm, 10:00 (English)
Note

QC 20250131

Available from: 2025-01-31. Created: 2025-01-31. Last updated: 2025-01-31. Bibliographically approved.
Imtiaz, S. & Gross, J. (2024). Machine Learning Based Fair Resource Allocation Leveraging User Coordinates in Multi-Antenna Systems.
Machine Learning Based Fair Resource Allocation Leveraging User Coordinates in Multi-Antenna Systems
2024 (English) Manuscript (preprint) (Other academic)
National Category
Telecommunications
Identifiers
urn:nbn:se:kth:diva-359394 (URN)
Available from: 2025-01-30. Created: 2025-01-30. Last updated: 2025-01-31. Bibliographically approved.
Marcu, A.-D., Peesapati, S. K., Cortes, J. M., Imtiaz, S. & Gross, J. (2023). Explainable Artificial Intelligence for Energy-Efficient Radio Resource Management. In: 2023 IEEE Wireless Communications and Networking Conference (WCNC). Paper presented at the IEEE Wireless Communications and Networking Conference (WCNC), March 26-29, 2023, Glasgow, Scotland. Institute of Electrical and Electronics Engineers (IEEE)
Explainable Artificial Intelligence for Energy-Efficient Radio Resource Management
2023 (English) In: 2023 IEEE Wireless Communications and Networking Conference (WCNC), Institute of Electrical and Electronics Engineers (IEEE), 2023. Conference paper, Published paper (Refereed)
Abstract [en]

As wireless systems evolve, the problems of radio resource management (RRM) become harder to solve. Once the additional constraint of energy-efficient utilization of resources is factored in, these problems become even more challenging. Thus, experts have started developing solutions based on complex artificial intelligence (AI) models that, unfortunately, suffer from a performance-explainability trade-off. In this work, we propose an explainable AI (XAI) methodology for addressing this trade-off. Our methodology can be used to generate feature importance explanations of AI models through three XAI methods: (i) Kernel SHapley Additive exPlanations (SHAP), (ii) Counterfactual Explanations for Robustness, Transparency, Interpretability, and Fairness of Artificial Intelligence models (CERTIFAI), and (iii) Anchors. For Anchors, we formulate a new feature importance score based on the feature's presence within the rules built by the method. We then use the generated explanations to improve the understanding of the model and reduce its complexity through a feature selection process. By applying our methodology to a reinforcement learning (RL) agent designed for energy-efficient RRM, we were able to reduce its complexity by approximately 27% according to various metrics, without losing performance. Additionally, we show that the AI-based inference process can be replaced with an Anchors-based inference process that offers similar performance and higher interpretability for humans.
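
The Anchors-based feature importance score is the abstract's novel ingredient, but its exact formula is not stated here. The sketch below shows one plausible reading under explicit assumptions: importance is the fraction of explained instances whose anchor rule mentions the feature, which can then drive the feature selection step described above.

    # Hypothetical anchor-rule format: one set of feature indices per explained
    # instance. The scoring rule itself is an assumption, not the paper's formula.
    import numpy as np

    def anchor_feature_importance(anchors, n_features):
        counts = np.zeros(n_features)
        for rule in anchors:
            for f in rule:
                counts[f] += 1
        return counts / max(len(anchors), 1)

    anchors = [{0, 2}, {2}, {1, 2, 4}, {0}]             # 4 instances, toy rules
    print(anchor_feature_importance(anchors, n_features=5))
    # Feature 2 appears in 3 of 4 rules -> importance 0.75.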

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE Wireless Communications and Networking Conference, ISSN 1525-3511
Keywords
5G Networks, Energy Efficiency, Radio Resource Management, Reinforcement Learning, Explainable AI
National Category
Telecommunications
Identifiers
urn:nbn:se:kth:diva-329376 (URN) · 10.1109/WCNC55385.2023.10119130 (DOI) · 000989491900437 (ISI) · 2-s2.0-85159784768 (Scopus ID)
Conference
IEEE Wireless Communications and Networking Conference (WCNC), March 26-29, 2023, Glasgow, Scotland
Note

QC 20230620

Available from: 2023-06-20. Created: 2023-06-20. Last updated: 2023-06-20. Bibliographically approved.
Imtiaz, S., Schiessl, S., Koudouridis, G. P. & Gross, J. (2021). Coordinates-Based Resource Allocation Through Supervised Machine Learning. IEEE Transactions on Cognitive Communications and Networking, 7(4), 1347-1362
Coordinates-Based Resource Allocation Through Supervised Machine Learning
2021 (English) In: IEEE Transactions on Cognitive Communications and Networking, E-ISSN 2332-7731, Vol. 7, no. 4, p. 1347-1362. Article in journal (Refereed), Published
Abstract [en]

Appropriate allocation of system resources is essential for meeting the increased user-traffic demands of next-generation wireless technologies. Traditionally, the system relies on the channel state information (CSI) of the users for optimizing resource allocation, which becomes costly under fast-varying channel conditions. In such cases, an estimate of the terminals' position information provides an alternative to estimating the channel condition. In this work, we propose a coordinates-based resource allocation scheme using supervised machine learning techniques, and investigate how efficiently this scheme performs in comparison to the traditional approach under various propagation conditions. As a first step, we consider a simple system setup where a single transmitter serves a single mobile user. The performance results show that the coordinates-based resource allocation scheme achieves a performance very close to the CSI-based scheme, even when the available user coordinates are erroneous. The performance is quite consistent, especially when complex learning frameworks such as random forests and neural networks are used for resource allocation. In terms of applicability, a training time of about 4 s is required for coordinates-based resource allocation using the random forest algorithm, and the appropriate resource allocation is predicted in less than 90 μs with a learnt model smaller than 1 kB.
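
The implementation-cost figures quoted above (about 4 s of training, under 90 μs per prediction, a model below 1 kB) can be reproduced in spirit with standard tooling. The sketch below measures the same three quantities for a random forest on synthetic data; the dataset and hyperparameters are illustrative assumptions, not the paper's configuration.

    import pickle, time
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 100, size=(10000, 2))            # stand-in user coordinates
    y = rng.integers(0, 8, size=10000)                  # stand-in allocation labels

    model = RandomForestClassifier(n_estimators=20, max_depth=8, random_state=1)
    t0 = time.perf_counter()
    model.fit(X, y)
    print(f"training time: {time.perf_counter() - t0:.2f} s")

    t0 = time.perf_counter()
    model.predict(X[:1000])
    print(f"prediction time per sample: {(time.perf_counter() - t0) / 1000 * 1e6:.1f} us")
    print(f"serialized model size: {len(pickle.dumps(model)) / 1024:.1f} kB")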

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2021
Keywords
Wireless communication system, resource allocation, position information, machine learning
National Category
Telecommunications, Signal Processing, Communication Systems
Identifiers
urn:nbn:se:kth:diva-306759 (URN) · 10.1109/TCCN.2021.3072839 (DOI) · 000728144400029 (ISI) · 2-s2.0-85104253160 (Scopus ID)
Note

QC 20211230

Available from: 2021-12-30. Created: 2021-12-30. Last updated: 2025-01-31. Bibliographically approved.
Imtiaz, S., Koudouridis, G. P. & Gross, J. (2019). On the feasibility of coordinates-based resource allocation through machine learning. In: 2019 IEEE Global Communications Conference, GLOBECOM 2019 - Proceedings. Paper presented at the 2019 IEEE Global Communications Conference, GLOBECOM 2019, Hilton Waikoloa Village Resort, Waikoloa, United States, 9 December 2019 through 13 December 2019. Institute of Electrical and Electronics Engineers Inc., Article ID 9013883.
On the feasibility of coordinates-based resource allocation through machine learning
2019 (English) In: 2019 IEEE Global Communications Conference, GLOBECOM 2019 - Proceedings, Institute of Electrical and Electronics Engineers Inc., 2019, article id 9013883. Conference paper, Published paper (Refereed)
Abstract [en]

Over the last decade, there has been considerable research interest in exploiting terminal positions for various cellular network services and communication aspects. However, the relevance of terminal coordinates for resource allocation is relatively unexplored to date. In this work, we take a first step in that direction by studying coordinates-based resource allocation in an arguably favorable and straightforward setup. In particular, we consider the use of supervised machine learning for resource allocation. Our results show that, for the studied scenario, coordinates-based resource allocation can achieve a performance comparable to a CSI-based comparison scheme. While the main limiting factors are channel uncertainty and the accuracy of the terminal coordinates, more complex machine learning schemes such as random forests are able to provide some robustness despite these noisy features.
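
The robustness claim (random forests tolerating erroneous coordinates) lends itself to a simple experiment: train on clean positions, then evaluate with growing position error. The sketch below does exactly that on synthetic data; the channel model and error magnitudes are assumptions made for illustration only.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(2)
    coords = rng.uniform(0, 100, size=(8000, 2))
    snr_db = 40 - 30 * np.log10(np.linalg.norm(coords, axis=1) + 1)
    labels = np.clip((snr_db // 3).astype(int), 0, 9)

    model = RandomForestClassifier(n_estimators=50, random_state=2)
    model.fit(coords[:6000], labels[:6000])
    for sigma in (0, 1, 5, 10):                         # position error std. dev. (m)
        noisy = coords[6000:] + rng.normal(0, sigma, size=(2000, 2))
        print(f"sigma={sigma:>2} m  accuracy={model.score(noisy, labels[6000:]):.3f}")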

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2019
Keywords
Machine learning, Position coordinates, Resource allocation, Wireless communication system, Decision trees, Random forests, Supervised learning, Cellular network, Channel uncertainties, Complex machines, Research interests, Supervised machine learning, Terminal position, Learning systems
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-274110 (URN) · 10.1109/GLOBECOM38437.2019.9013883 (DOI) · 000552238604008 (ISI) · 2-s2.0-85081971928 (Scopus ID)
Conference
2019 IEEE Global Communications Conference, GLOBECOM 2019, Hilton Waikoloa Village Resort, Waikoloa, United States, 9 December 2019 through 13 December 2019
Note

QC 20200909

Part of ISBN 9781728109626

Available from: 2020-07-02. Created: 2020-07-02. Last updated: 2025-01-31. Bibliographically approved.
Imtiaz, S., Koudouridis, G. P., Ghauch, H. & Gross, J. (2018). Random forests for resource allocation in 5G cloud radio access networks based on position information. EURASIP Journal on Wireless Communications and Networking, 2018(1), Article ID 142.
Random forests for resource allocation in 5G cloud radio access networks based on position information
2018 (English) In: EURASIP Journal on Wireless Communications and Networking, ISSN 1687-1472, E-ISSN 1687-1499, Vol. 2018, no. 1, article id 142. Article in journal (Refereed), Published
Abstract [en]

Next-generation 5G cellular networks are envisioned to accommodate an unprecedented number of Internet of Things (IoT) and user devices while providing high aggregate multi-user sum rates and low latencies. To this end, cloud radio access networks (CRAN), which operate at short radio frames and coordinate dense sets of spatially distributed radio heads, have been proposed. However, coordinating spatially and temporally denser resources for a larger user population implies considerable resource allocation complexity and significant system signalling overhead when associated with channel state information (CSI)-based resource allocation (RA) schemes. In this paper, we propose a novel solution that uses random forests as a supervised machine learning approach to determine the resource allocation in multi-antenna CRAN systems based primarily on the position information of user terminals. Our simulation studies show that the proposed learning-based RA scheme performs comparably to a CSI-based scheme in terms of spectral efficiency and is a promising approach to mastering the complexity of future cellular networks. When the system overhead is taken into account, the proposed learning-based RA scheme, which utilizes position information, outperforms the legacy CSI-based scheme by up to 100%. The most important factors influencing the performance of the proposed learning-based RA scheme are antenna orientation randomness and position inaccuracies. While the proposed random forests scheme is robust against position inaccuracies and changes in the propagation scenario, we complement our scheme with three approaches that restore most of the original performance when facing random antenna orientations of the user terminal.
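
The "up to 100%" overhead result follows from simple accounting once CSI pilots and feedback consume a large share of a short CRAN radio frame. A back-of-the-envelope version, with every number below being an illustrative assumption rather than a value from the paper:

    frame_symbols = 100        # symbols per short radio frame (assumed)
    csi_overhead = 50          # symbols spent on CSI pilots/feedback (assumed)
    pos_overhead = 2           # symbols to report a position estimate (assumed)

    raw_se_csi = 4.0           # bit/s/Hz with CSI-based allocation (assumed)
    raw_se_pos = 3.8           # bit/s/Hz with position-based allocation (assumed)

    eff_csi = raw_se_csi * (1 - csi_overhead / frame_symbols)
    eff_pos = raw_se_pos * (1 - pos_overhead / frame_symbols)
    print(f"effective SE, CSI-based:      {eff_csi:.2f} bit/s/Hz")
    print(f"effective SE, position-based: {eff_pos:.2f} bit/s/Hz ({eff_pos / eff_csi - 1:+.0%})")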

Place, publisher, year, edition, pages
Springer, 2018
National Category
Communication Systems
Identifiers
urn:nbn:se:kth:diva-238855 (URN) · 10.1186/s13638-018-1149-7 (DOI) · 000447851900004 (ISI) · 2-s2.0-85048290841 (Scopus ID)
Note

QC 20181113

Available from: 2018-11-13. Created: 2018-11-13. Last updated: 2025-01-31. Bibliographically approved.
Imtiaz, S., Ghauch, H., Koudouridis, G. & Gross, J. (2018). Random Forests Resource Allocation for 5G Systems: Performance and Robustness Study. Paper presented at the International Workshop on Big Data with Computational Intelligence for Wireless Networking (IEEE WCNC BDCIWN).
Random Forests Resource Allocation for 5G Systems: Performance and Robustness Study
2018 (English) Conference paper, Published paper (Refereed)
National Category
Engineering and Technology
Research subject
Electrical Engineering
Identifiers
urn:nbn:se:kth:diva-223418 (URN) · 10.1109/WCNCW.2018.8369028 (DOI) · 000442393300056 (ISI) · 2-s2.0-85048899054 (Scopus ID)
Conference
International Workshop on Big Data with Computational Intelligence for Wireless Networking (IEEE WCNC BDCIWN)
Note

QC 20180327

Available from: 2018-02-21. Created: 2018-02-21. Last updated: 2025-01-31. Bibliographically approved.
Ghauch, H., Rahman, M. M., Imtiaz, S., Qvarfordt, C., Skoglund, M. & Gross, J. (2018). User Assignment in C-RAN Systems: Algorithms and Bounds. IEEE Transactions on Wireless Communications, 17(6), 3889-3902
User Assignment in C-RAN Systems: Algorithms and Bounds
2018 (English) In: IEEE Transactions on Wireless Communications, ISSN 1536-1276, E-ISSN 1558-2248, Vol. 17, no. 6, p. 3889-3902. Article in journal (Refereed), Published
Abstract [en]

In this paper, we investigate the problem of mitigating interference between so-called antenna domains of a cloud radio access network (C-RAN). In contrast to previous work, we turn to an approach that relies primarily on the optimal assignment of users to central processors in a C-RAN deployment. We formulate this user assignment problem as an integer optimization problem and propose an iterative algorithm for obtaining a solution. Motivated by the lack of optimality guarantees on such solutions, we opt to find lower bounds on the problem and the resulting interference leakage in the network. We thus derive the corresponding Dantzig-Wolfe decomposition, formulate the dual problem, and show that the former offers a tighter bound than the latter. We highlight the fact that the bounds in question consist of linear problems with an exponential number of variables and adapt the column generation method for solving them. In addition to shedding light on the tightness of the bounds in question, our numerical results show significant sum-rate gains over several comparison schemes. Moreover, the proposed scheme delivers similar performance to weighted minimum mean squared error (MMSE) at a significantly lower complexity (around 10 times lower).
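
The assignment idea can be sketched with a plain block-coordinate heuristic: given pairwise interference coupling coefficients, each user is iteratively moved to the central processor that minimizes its coupling to users outside that processor's domain. This is a toy stand-in in the spirit of the paper's iterative algorithm, not the exact method, its Dantzig-Wolfe bound, or the column generation solver.

    import numpy as np

    rng = np.random.default_rng(3)
    n_users, n_cps = 12, 3
    C = rng.uniform(0, 1, size=(n_users, n_users))      # synthetic coupling coefficients
    C = (C + C.T) / 2
    np.fill_diagonal(C, 0)

    assign = rng.integers(0, n_cps, size=n_users)

    def leakage(a):
        """Total coupling between users assigned to different processors."""
        return (C * (a[:, None] != a[None, :])).sum() / 2

    for _ in range(20):                                 # sweep until stable
        changed = False
        for u in range(n_users):
            # Cost of placing user u in processor k: coupling to users outside k.
            costs = [C[u, assign != k].sum() for k in range(n_cps)]
            k_best = int(np.argmin(costs))
            if k_best != assign[u]:
                assign[u], changed = k_best, True
        if not changed:
            break
    print("assignment:", assign, " leakage:", round(leakage(assign), 3))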

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
Cloud radio access networks, user assignment, interference coupling coefficients, block-coordinate descent, Dantzig-Wolfe decomposition
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-231722 (URN) · 10.1109/TWC.2018.2817223 (DOI) · 000435196200028 (ISI) · 2-s2.0-85044848161 (Scopus ID)
Note

QC 20180814

Available from: 2018-08-14. Created: 2018-08-14. Last updated: 2022-06-26. Bibliographically approved.
Ghauch, H., Imtiaz, S., Skoglund, M., Koudouridis, G. & Gross, J. (2017). Fairness and User Assignment in Cloud-RAN. In: 2017 IEEE 86th Vehicular Technology Conference (VTC-Fall). Paper presented at the IEEE Vehicular Technology Conference (VTC Fall 2017).
Fairness and User Assignment in Cloud-RAN
2017 (English) In: 2017 IEEE 86th Vehicular Technology Conference (VTC-Fall), 2017. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we extend our previous work on user assignment (UA) in Cloud-RAN, where we proposed an algorithm for user assignment. We highlight the inherent fairness issue in the latter UA scheme, since some users in the system will never get served. To improve fairness, we propose that the UA scheme be preceded by a user scheduling step which selects, at any time, the users that should be considered by the UA algorithm for scheduling in the next time slot. Two user scheduling approaches have been studied, as sketched after this abstract. The first scheme improves the minimum throughput (MT) by selecting, at any time, the users with the lowest throughput. The second scheme is based on round-robin (RR) scheduling, where the set of potentially scheduled users for the next slot is formed by excluding all previously served users in that round. In both cases, the subset of users actually served is determined by the UA algorithm. We evaluate their fairness and sum-rate performance via extensive simulations. While one might have expected a trade-off between sum-rate performance and fairness, our results show that MT improves both metrics compared to the original UA algorithm (without fairness) for some choices of parameter values. This implies that both fairness and aggregate system performance can be improved by a careful choice of the number of assigned and served users.
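
The two scheduling rules are simple enough to state as code. A minimal sketch under assumed inputs: per-user throughput averages, a served-set for the current round, and an illustrative candidate-set size.

    import numpy as np

    def mt_candidates(throughputs, n):
        """Minimum-throughput rule: pick the n users with the lowest throughput."""
        return np.argsort(throughputs)[:n]

    def rr_candidates(users, served_this_round, n):
        """Round-robin rule: exclude users already served in this round."""
        return [u for u in users if u not in served_this_round][:n]

    thr = np.array([5.2, 0.0, 3.1, 0.4, 7.8, 1.9])
    print("MT candidates:", mt_candidates(thr, 3))               # users 1, 3, 5
    print("RR candidates:", rr_candidates(range(6), {0, 4}, 3))  # users 1, 2, 3

The UA algorithm then picks the subset of these candidates actually served, which is where the interplay between the number of assigned and served users arises.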

National Category
Engineering and Technology
Research subject
Electrical Engineering
Identifiers
urn:nbn:se:kth:diva-223415 (URN) · 10.1109/VTCFall.2017.8288047 (DOI) · 000428141600171 (ISI) · 2-s2.0-85045234015 (Scopus ID)
Conference
IEEE Vehicular Technology Conference (VTC Fall 2017)
Note

QC 20180327

Available from: 2018-02-21. Created: 2018-02-21. Last updated: 2024-03-18. Bibliographically approved.
Ghauch, H., Ur Rahman, M., Imtiaz, S. & Gross, J. (2016). Coordination and Antenna Domain Formation in Cloud-RAN Systems. In: 2016 IEEE International Conference on Communications, ICC 2016. Paper presented at the 2016 IEEE International Conference on Communications, ICC 2016, Kuala Lumpur, Malaysia, 22 May 2016 through 27 May 2016. Institute of Electrical and Electronics Engineers (IEEE), Article ID 7511264.
Coordination and Antenna Domain Formation in Cloud-RAN Systems
2016 (English) In: 2016 IEEE International Conference on Communications, ICC 2016, Institute of Electrical and Electronics Engineers (IEEE), 2016, article id 7511264. Conference paper, Published paper (Refereed)
Abstract [en]

We study the problem of antenna domain formation (ADF) in cloud-RAN systems, whereby multiple remote radio heads (RRHs) are each to be assigned to a set of antenna domains (ADs) such that the total interference between the ADs is minimized. We formulate the corresponding optimization problem by introducing the concept of interference coupling coefficients among pairs of radio heads. We then propose a low-overhead algorithm that allows the problem to be solved in a distributed fashion among the aggregation nodes (ANs), and establish basic convergence results. Moreover, we propose a simple relaxation of the problem, enabling us to characterize its maximum performance. We follow a layered coordination structure: after the ADs are formed, radio heads are clustered to perform coordinated beamforming using the well-known weighted-MMSE algorithm. Finally, our simulations show that using the proposed ADF mechanism significantly increases the sum rate of the system with respect to a random assignment of radio heads.
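
For a toy instance, the "maximum performance" characterization mentioned above can be emulated by brute force: enumerate every assignment of radio heads to domains and compare the best achievable inter-domain interference against a random assignment. The coupling coefficients below are synthetic; the paper's distributed algorithm avoids this exhaustive search.

    import itertools
    import numpy as np

    rng = np.random.default_rng(4)
    n_rrh, n_domains = 8, 2
    C = rng.uniform(0, 1, size=(n_rrh, n_rrh))          # synthetic coupling coefficients
    C = (C + C.T) / 2
    np.fill_diagonal(C, 0)

    def inter_domain_interference(assign):
        a = np.asarray(assign)
        return (C * (a[:, None] != a[None, :])).sum() / 2

    best = min(itertools.product(range(n_domains), repeat=n_rrh),
               key=inter_domain_interference)
    rand = rng.integers(0, n_domains, size=n_rrh)
    print("optimal leakage:", round(inter_domain_interference(best), 3))
    print("random  leakage:", round(inter_domain_interference(rand), 3))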

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2016
Series
IEEE International Conference on Communications, ISSN 1550-3607
Keywords
5G, Cloud RAN, radio head assignment, antenna domain formation, interference coupling, block coordination descent
National Category
Communication Systems
Identifiers
urn:nbn:se:kth:diva-184884 (URN) · 10.1109/ICC.2016.7511264 (DOI) · 000390993203135 (ISI) · 2-s2.0-84981333316 (Scopus ID) · 978-1-4799-6664-6 (ISBN)
Conference
2016 IEEE International Conference on Communications, ICC 2016, Kuala Lumpur, Malaysia, 22 May 2016 through 27 May 2016
Funder
ICT - The Next Generation
Note

QC 20160407

Available from: 2016-04-06. Created: 2016-04-06. Last updated: 2024-03-18. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-6864-6970