Abstract: Inference of large language models (LLMs) is common in cloud environments. As the elastic resource management capabilities and the flexible pay-as-you-go billing model offered by serverless computing ...
Abstract: Traditional federated learning (FL) architectures are vulnerable to significant security risks. Centralized servers create single points of failure and are susceptible to adversarial attacks ...