Loki: A System for Serving ML Inference Pipelines with Hardware and Accuracy Scaling
Proceedings of the 33rd International Symposium on High-Performance Parallel and Distributed Computing (HPDC 2024)
Keywords: Inference Serving, Model Serving, Inference Pipelines, Machine Learning, Autoscaling