SLA-aware Network Scheduling for Cloud Computing

Transcription

Kyungwoon Lee, Korea University, kwlee@os.korea.ac.kr
Cheol-Ho Hong, Korea University, chhong@os.korea.ac.kr
Chuck Yoo, Korea University, hxy@os.korea.ac.kr
1. Motivation
As the demand for cloud computing increases, supporting a Service Level Agreement (SLA) becomes increasingly important. An SLA is a contract between cloud consumers and providers that is crucial in terms of service quality, and cloud providers can achieve the SLA objective through proper resource management. Several studies have been presented to guarantee SLAs in recent cloud environments [1, 3]. However, previous studies provide only a single performance policy (e.g., proportional sharing) and static performance. These approaches cannot fully support various workloads, each of which has different performance requirements and fluctuating resource demands [2]. Therefore, multiple performance policies and dynamic resource allocation should be provided concurrently in order to satisfy the workloads' various demands. We propose an SLA-aware network scheduler that provides multiple performance policies to support the various performance requirements of each consumer. In addition, the scheduler maximizes resource utilization by efficiently distributing available resources.
2. Design of SLA-aware Network Scheduler

An SLA-aware network scheduler is based on a dynamic resource allocation algorithm that differentiates the network performance of virtual machines (VMs) running on a physical machine. First, in order to support various requirements, our scheduler provides multiple performance policies that determine VM performance: weight-based proportional sharing, minimum bandwidth reservation, and maximum bandwidth limitation. Weight-based proportional sharing is the base policy, and the proportion is in accordance with the VM weight, which can change during runtime. Minimum bandwidth reservation can prevent dramatic performance degradation by guaranteeing network performance greater than a configured value. Maximum bandwidth limitation can prevent aggressive resource consumption by a specific VM. The network scheduler calculates the amount of network resources allocated to each VM according to a VM configuration that indicates the required performance policies.

Second, an SLA-aware network scheduler enhances resource management efficiency by preventing unnecessary resource allocation to a VM that is not running a network workload. Our scheduler inspects the remaining resources whenever it allocates network resources. If the remaining resources of a VM exceed a certain threshold, the scheduler considers that the network application of that VM consumes fewer network resources than what is given, and hence yields the excess resources to other VMs. Through the distribution of surplus resources, an SLA-aware network scheduler enhances resource utilization.
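To make the combination of the three policies and the surplus redistribution concrete, the following Python sketch computes one allocation round. It is a minimal illustration under assumed interfaces, not the scheduler's actual implementation: the VMConfig record, the fixed 1 Gbps capacity, the measured per-VM demand, and the iterative redistribution loop are all assumptions made for the example (the sketch also presumes the configured minimums fit within the link capacity).

# Minimal sketch of one allocation round for an SLA-aware network scheduler.
# Assumptions (not from the paper): a VMConfig record per VM, a fixed link
# capacity, a measured per-VM demand, and an iterative redistribution of
# surplus bandwidth; minimum reservations are assumed to fit the link.
from dataclasses import dataclass

LINK_CAPACITY_MBPS = 1000.0  # assumed 1 Gbps physical NIC

@dataclass
class VMConfig:
    name: str
    weight: int                          # weight-based proportional sharing
    min_mbps: float = 0.0                # minimum bandwidth reservation
    max_mbps: float = float("inf")       # maximum bandwidth limitation
    demand_mbps: float = float("inf")    # observed demand; idle VMs need little

def allocate(vms, capacity=LINK_CAPACITY_MBPS):
    """Return {vm.name: Mbps} honouring weight, min, max, and demand."""
    alloc = {vm.name: 0.0 for vm in vms}
    active, remaining = list(vms), capacity
    # Hand out the remaining capacity in proportion to weight.  A VM pinned
    # at its cap or demand leaves the active set, and the bandwidth it could
    # not use is redistributed to the remaining VMs in the next pass.
    while active and remaining > 1e-6:
        total_weight = sum(vm.weight for vm in active)
        distributed, next_active = 0.0, []
        for vm in active:
            share = remaining * vm.weight / total_weight
            target = alloc[vm.name] + share
            bounded = min(max(target, vm.min_mbps), vm.max_mbps, vm.demand_mbps)
            distributed += max(bounded - alloc[vm.name], 0.0)
            alloc[vm.name] = bounded
            if bounded + 1e-6 < min(vm.max_mbps, vm.demand_mbps):
                next_active.append(vm)       # can still absorb surplus
        if distributed <= 1e-6:
            break
        remaining -= distributed
        active = next_active
    return alloc

# Example: a capped VM, a VM with a reservation, and a mostly idle VM.
print(allocate([VMConfig("VM1", weight=1, max_mbps=500),
                VMConfig("VM2", weight=2, min_mbps=200),
                VMConfig("VM3", weight=3, demand_mbps=100)]))
# -> roughly {'VM1': 300.0, 'VM2': 600.0, 'VM3': 100.0}

In this sketch the bandwidth that the idle VM3 does not use flows back to VM1 and VM2, which is the intended effect of the surplus-distribution step described above.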
3. Evaluation

We evaluate the benefits of an SLA-aware network scheduler and observe how network performance changes when we apply different performance policies to VMs. Figs. 1 to 3 (in the poster) present the VMs' performance changes according to the different performance policies. The evaluation results show that our scheduler successfully manages the network performance of each VM and always utilizes 100% of the network resources.
4. Conclusion
We presented an SLA-aware network scheduler that differentiates the network performance of VMs with multiple performance policies. In this work, we considered only bandwidth to determine network performance. In future work, we plan to add another performance metric, latency. Moreover, we plan to extend the current mechanism so that it recognizes the SLA configuration not only per VM but also per application.
References
[1] F. Dan, W. Xiaojing, Z. Wei, T. Wei, and L. Jingning. vSuit: QoS-oriented scheduler in network virtualization. In Advanced Information Networking and Applications Workshops (WAINA), 2012 26th International Conference on, pages 423-428. IEEE, 2012.
[2] M. Y. Galperin and E. V. Koonin. Who's your neighbor? New computational approaches for functional genomics. Nature Biotechnology, 18(6):609-613, 2000.
[3] L. Popa, G. Kumar, M. Chowdhury, A. Krishnamurthy, S. Ratnasamy, and I. Stoica. FairCloud: Sharing the network in cloud computing. In Proceedings of the ACM SIGCOMM 2012 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication, pages 187-198. ACM, 2012.
Poster abstract for Eurosys ’15
SLA-aware Network Scheduling for Cloud Computing
Kyungwoon Lee, Cheol-Ho Hong, and Chuck Yoo
Korea University, Korea
Introduction
• Increasing complexity of cloud computing
  - Various and unpredictable workloads
  → Service quality guarantee is important
• Service Level Agreement (SLA) in cloud computing
  - A contract between cloud provider and consumer in terms of service quality
  - Cloud providers can achieve an SLA objective through proper resource management
• Multiple performance policies are necessary in cloud computing
  - Workloads have various performance requirements
    (e.g. downloading files: large network bandwidth; streaming video and VoIP: low latency)
  - Different performance policies should be applied accordingly
  → SLA-aware Network Scheduler

Design of SLA-aware Network Scheduler
• Dynamic Resource Allocation Algorithm
  - Manipulates network resources periodically
  - Any change in configuration can be applied during runtime
  - The corresponding policy is recognized using a specific field value (see the configuration sketch below)
• Multiple Performance Policies
  - Weight-based Proportional Sharing
  - Maximum Bandwidth Limitation
  - Minimum Bandwidth Reservation
• Prevent unnecessary resource allocation
  - Inspect the resource usage of every VM
  - Distribute remaining resources to other VMs
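As one way the per-VM configuration could look, the Python sketch below keeps an SLA record per VM whose policy-related fields drive the scheduler. The table layout, field names, and the update helper are hypothetical, invented for illustration rather than taken from the scheduler's real interface; a runtime change simply rewrites the fields that the next scheduling round reads.

# Hypothetical per-VM SLA table; every name and field here is an assumption
# made for illustration, not the scheduler's actual configuration format.
SLA_TABLE = {
    "VM1": {"policy": "weight_ps", "weight": 1},
    "VM2": {"policy": "max_bw", "weight": 2, "max_mbps": 500},
    "VM3": {"policy": "min_bw", "weight": 3, "min_mbps": 200},
}

def update_sla(vm_id, **fields):
    """Apply a configuration change during runtime; the next scheduling
    round recognizes the policy from the updated field values."""
    SLA_TABLE.setdefault(vm_id, {}).update(fields)

# Example: cap VM1 at 300 Mbps without restarting the VM or the scheduler.
update_sla("VM1", policy="max_bw", max_mbps=300)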
[Figure: Design overview. VM1-VM4 with vNICs deliver requests to an SLA-aware request queue; the network scheduler (NS) in the hypervisor checks the corresponding performance policy of each VM (Policy 1: weight-based PS, Policy 2: maximum BW, Policy 3: minimum BW), inspects resource usage, determines the amount of resources to allocate, and delivers the requests to the NIC in hardware.]
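The request flow summarized in the figure (request queue, policy check, allocation, NIC) could be approximated by a loop such as the following Python sketch. The queue, the per-VM byte budgets, and the nic object are stand-ins invented for this sketch; they are not the real Xen interfaces.

import queue

# Stand-ins invented for the sketch: a queue fed by the vNICs and a nic
# object that transmits packets; neither is the real Xen interface.
request_queue = queue.Queue()

def scheduler_round(budget_bytes, usage, nic):
    """One pass over queued requests, mirroring the flow in the figure."""
    spent = {vm: 0 for vm in budget_bytes}
    deferred = []
    while not request_queue.empty():
        vm_id, pkt = request_queue.get()
        allowed = budget_bytes[vm_id]        # budget derived from the VM's policy
        if spent[vm_id] + len(pkt) <= allowed:
            nic.send(pkt)                    # deliver the request to the NIC
            spent[vm_id] += len(pkt)
            usage[vm_id] = usage.get(vm_id, 0) + len(pkt)
        else:
            deferred.append((vm_id, pkt))    # defer to the next round
    for item in deferred:
        request_queue.put(item)
    # usage is inspected afterwards to spot VMs that underuse their share,
    # whose surplus is then redistributed to the other VMs.
    return usage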
Preliminary Results
• System setup
  - Two physical machines directly connected with 1 Gbps Ethernet
  - Bandwidth measurement: Iperf benchmark (see the measurement sketch below)
  - Target Machine: Intel i5-2500 3.3 GHz CPU, DDR3 8 GB memory, Xen 4.2.1
  - Performance evaluator: Intel i7-3770K 3.5 GHz CPU, DDR3 16 GB memory, Linux 3.11.10
• Evaluation Methodology
  - Sequential generation of Virtual Machines (VMs)
  - Change the performance policy of VMs
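A bandwidth measurement in this setup could be scripted roughly as in the Python sketch below. The server address, the run length, and the output parsing (classic iperf v2 prints a final "... N Mbits/sec" line) are assumptions for the sketch, not the exact procedure used on the poster.

import subprocess

def measure_bandwidth(server_ip, seconds=30):
    """Run the iperf client against a server and return Mbits/sec.
    The parsing assumes classic iperf v2 output ('... 941 Mbits/sec')."""
    out = subprocess.run(
        ["iperf", "-c", server_ip, "-t", str(seconds)],
        capture_output=True, text=True, check=True).stdout
    for line in reversed(out.splitlines()):
        if "Mbits/sec" in line:
            return float(line.split()[-2])
    raise RuntimeError("no bandwidth report found in iperf output")

# Hypothetical usage: one iperf server per VM, addresses are placeholders.
# for vm_ip in ["10.0.0.1", "10.0.0.2"]:
#     print(vm_ip, measure_bandwidth(vm_ip))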
• Evaluation Results
Fig. 1: Weight-based Proportional Sharing (weights VM1:VM2:VM3:VM4:VM5 = 1:2:3:4:5)
Fig. 2: Maximum Bandwidth Limitation (VM1 = 500 Mbps, other VMs same as Fig. 1)
Fig. 3: Minimum Bandwidth Reservation (VM1 = 200 Mbps, other VMs same as Fig. 1)
http://os.korea.ac.kr
• Fig. 1 shows dynamically changing performance according to the weight of VMs (a rough expected-share calculation follows below)
• Fig. 2 demonstrates that aggressive resource consumption by a specific VM can be prevented
• Fig. 3 shows that minimum performance can be provided independent of the number of VMs
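As a rough sanity check on the Fig. 1 configuration, ideal weight-based proportional sharing would split the link in the ratio 1:2:3:4:5. The ~940 Mbps of usable TCP throughput on 1 Gbps Ethernet assumed below is an illustrative figure, not a number reported on the poster.

# Expected shares under ideal weight-based proportional sharing (Fig. 1 setup).
# The usable link rate (~940 Mbps of TCP goodput on 1 Gbps Ethernet) is an
# assumed figure for illustration.
weights = {"VM1": 1, "VM2": 2, "VM3": 3, "VM4": 4, "VM5": 5}
usable_mbps = 940
total = sum(weights.values())
for vm, w in weights.items():
    print(f"{vm}: {usable_mbps * w / total:.0f} Mbps")
# -> VM1 ~63, VM2 ~125, VM3 ~188, VM4 ~251, VM5 ~313 Mbps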
Future work
• Expanding performance metric
  - Currently we only deal with "bandwidth"
  - Additional performance metric → "latency"
  - Support for both metrics concurrently
• Fine-grained performance management
  - Currently one network application per VM
  - Control multiple network applications on a VM simultaneously
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2010-0029180) with KREONET.