Hierarchical Inference at the Edge: A Batch Processing Approach
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering. ORCID iD: 0009-0005-4420-8262
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering. ORCID iD: 0000-0002-2739-5060
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering. ORCID iD: 0000-0001-7220-5353
University of Victoria, Computer Science, British Columbia, Canada.
2024 (English). In: Proceedings - 2024 IEEE/ACM Symposium on Edge Computing, SEC 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 476-482. Conference paper, Published paper (Refereed).
Abstract [en]

Deep learning (DL) applications have rapidly evolved to address increasingly complex tasks by leveraging large-scale, resource-intensive models. However, deploying such models on low-power devices is neither practical nor economically scalable. While cloud-centric solutions satisfy these computational demands, they incur communication costs and latencies that hurt real-time applications when every computation task is offloaded. To mitigate these concerns, hierarchical inference (HI) frameworks have been proposed, enabling edge devices equipped with small ML models to collaborate with edge servers by selectively offloading complex tasks. Existing HI approaches offload each selected sample immediately, which can lead to inefficiencies due to frequent communication, especially in time-varying wireless environments. In this work, we introduce Batch HI, an approach that offloads samples in batches, thereby reducing communication overhead and improving system efficiency while achieving performance comparable to existing HI methods. Additionally, we find the optimal batch size that strikes a balance between responsiveness and system time, tailored to specific user requirements. Numerical results confirm the effectiveness of our approach, highlighting the scenarios where batching is particularly beneficial.
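
The batching mechanism described in the abstract can be illustrated with a short sketch. The following Python snippet is a hypothetical illustration, not the authors' implementation: the toy local model, the confidence threshold, and the fixed batch size are assumptions made for exposition, whereas the paper derives the batch size from user requirements.

```python
# Hedged sketch of the Batch HI idea: keep confident local predictions,
# queue uncertain samples, and offload them in one batched transmission.
# All names and constants below are illustrative assumptions.
import random

CONF_THRESHOLD = 0.8   # assumed cutoff for trusting the small on-device model
BATCH_SIZE = 4         # assumed batch size B; the paper optimizes this value

def local_inference(sample):
    """Toy stand-in for a small edge model: returns (label, confidence)."""
    return "local-label", random.random()

def offload_batch(batch):
    """Toy stand-in for sending a batch to the edge server in one transmission."""
    print(f"offloading {len(batch)} samples in one transmission")
    return ["server-label"] * len(batch)

def batch_hi(stream):
    """Batch HI loop: defer low-confidence samples and offload only when
    BATCH_SIZE of them have accumulated, amortizing communication overhead."""
    pending, results = [], []
    for sample in stream:
        label, conf = local_inference(sample)
        if conf >= CONF_THRESHOLD:
            results.append(label)           # accept the local prediction
        else:
            pending.append(sample)          # defer to the edge server
            if len(pending) == BATCH_SIZE:  # batch is full: offload once
                results.extend(offload_batch(pending))
                pending.clear()
    if pending:                             # flush any leftover samples
        results.extend(offload_batch(pending))
    return results

if __name__ == "__main__":
    random.seed(0)
    print(len(batch_hi(range(20))), "samples classified")
```

Compared with immediate offloading, where each low-confidence sample triggers its own transmission, batching amortizes the per-transmission cost at the price of queueing delay; that is the responsiveness versus system-time trade-off the paper's choice of batch size targets.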

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024. p. 476-482
Keywords [en]
batching, edge computing, hierarchical inference, offloading decisions, regret bound, responsiveness, tiny ML
National Category
Computer Systems
Identifiers
URN: urn:nbn:se:kth:diva-359857
DOI: 10.1109/SEC62691.2024.00055
ISI: 001424939400046
Scopus ID: 2-s2.0-85216793011
OAI: oai:DiVA.org:kth-359857
DiVA, id: diva2:1937166
Conference
9th Annual IEEE/ACM Symposium on Edge Computing, SEC 2024, Rome, Italy, December 4-7, 2024
Note

Part of ISBN 979-8-3503-7828-3

QC 20250213

Available from: 2025-02-12. Created: 2025-02-12. Last updated: 2025-04-01. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Letsiou, Afroditi; Moothedath, Vishnu Narayanan; Behera, Adarsh Prasad; Gross, James
