Abstract
Optimizing the quality of machine learning (ML) services for individual consumers with specific objectives is crucial for improving consumer satisfaction. In this context, end-to-end ensemble ML serving (EEMLS) faces many challenges in selecting and deploying ensembles of ML models on diverse resources across the edge-cloud continuum. This paper presents a method for evaluating the runtime performance of inference services via consumer-defined metrics. We enable ML consumers to define high-level metrics and account for consumer satisfaction when estimating service costs. Moreover, we introduce a time-efficient ensemble selection algorithm to optimize EEMLS under intricate trade-offs between service quality and cost. Our extensive experiments demonstrate that the algorithm can be executed periodically despite the large search space, enabling dedicated optimization for individual consumers in dynamic contexts.
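The ensemble selection under quality/cost trade-offs described in the abstract can be illustrated with a minimal greedy sketch. This is a hypothetical stand-in, not the paper's algorithm: the `Model` fields, the consumer-defined `quality` score, and the quality-per-cost heuristic are all assumptions made for illustration.

```python
from dataclasses import dataclass

# Hypothetical model descriptors; the quality metric stands in for a
# consumer-defined metric, and cost for an estimated serving cost.
@dataclass
class Model:
    name: str
    quality: float  # higher is better
    cost: float     # estimated serving cost, same budget units

def select_ensemble(models, budget):
    """Greedily pick models by quality-per-cost until the budget is spent.

    A simple illustrative heuristic, not the paper's selection algorithm.
    """
    chosen, spent = [], 0.0
    for m in sorted(models, key=lambda m: m.quality / m.cost, reverse=True):
        if spent + m.cost <= budget:
            chosen.append(m)
            spent += m.cost
    return chosen

models = [
    Model("resnet50", quality=0.76, cost=3.0),
    Model("mobilenet", quality=0.70, cost=1.0),
    Model("efficientnet", quality=0.80, cost=4.0),
]
picked = select_ensemble(models, budget=4.0)
print([m.name for m in picked])  # → ['mobilenet', 'resnet50']
```

A real EEMLS optimizer would also need to account for deployment placement across edge and cloud resources and for runtime dynamics, which this sketch deliberately omits.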
Original language | English |
---|---|
Title of host publication | 2024 IEEE/ACM 17th International Conference on Utility and Cloud Computing (UCC) |
Number of pages | 6 |
Publication status | Accepted/In press - 5 Nov 2024 |
MoE publication type | A4 Conference publication |
Event | IEEE/ACM International Conference on Utility and Cloud Computing |
Location | The University of Sharjah, Sharjah, United Arab Emirates |
Duration | 16 Dec 2024 → 19 Dec 2024 |
Conference number | 17 |
Internet address | https://www.uccbdcat2024.org/ucc/ |
Conference
Conference | IEEE/ACM International Conference on Utility and Cloud Computing |
---|---|
Abbreviated title | UCC |
Country/Territory | United Arab Emirates |
City | Sharjah |
Period | 16/12/2024 → 19/12/2024 |
Internet address | https://www.uccbdcat2024.org/ucc/ |
Keywords
- ML Serving
- Ensemble Selection
- Ensemble ML
- End-to-End ML
- Performance Evaluation