
     Research Journal of Applied Sciences, Engineering and Technology


Workload Known VMM Scheduler for Server Consolidation for Enterprise Cloud Data Center

¹S. Suresh and ²S. Sakthivel
¹Department of Computer Science and Engineering, Adhiyamaan College of Engineering, Hosur-635109
²Department of Computer Science and Engineering, Sona College of Technology, TPTC Main Road, Salem-636005, Tamil Nadu, India
Research Journal of Applied Sciences, Engineering and Technology, 2015, 9(5): 380-390
http://dx.doi.org/10.19026/rjaset.9.1417  |  © The Author(s) 2015
Received: September 22, 2014  |  Accepted: November 10, 2014  |  Published: February 15, 2015

Abstract

This study proposes novel adaptive meta-heuristic scheduling policies for provisioning VCPU resources among competing VM service domains in a cloud. Such provisioning guarantees the Service Level Agreement (SLA) of each domain with respect to diverse workloads on the fly. The framework is built on CSIM models and tools, making it easy to understand and configure various virtualization setups. The study demonstrates the usefulness of the framework by evaluating proactive, reactive and adaptive VCPU scheduling algorithms, and examines how periodic versus aperiodic execution of control actions affects policy performance and speed of convergence. Periodic reactive resource allocation is used as the baseline for analysis, with average response time as the performance metric. Simulation-based experiments using a variety of real-world arrival traces and synthetic workloads show that the proposed provisioning technique detects changes in arrival pattern and resource demands and allocates resources accordingly to achieve application SLA targets. The proposed model improves CPU utilization and achieves a better tradeoff between resource utilization and performance, by 2 to 6% compared with the default VMM scheduler configurations for diverse workloads. In addition, the experiments show that the proposed Weighted Moving Average algorithm combined with the aperiodic policy significantly outperforms other dynamic VM consolidation algorithms in all cases with regard to the SLA metric, owing to a substantially reduced level of response time violations and a lower frequency of algorithm invocation.
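The abstract does not include implementation details, but the core idea it describes, a Weighted Moving Average (WMA) predictor driving aperiodic (event-triggered rather than timer-triggered) VCPU re-allocation, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the window size, drift threshold and class/function names (`AperiodicProvisioner`, `observe`) are assumptions chosen for clarity.

```python
from collections import deque

def weighted_moving_average(samples):
    """WMA with linearly increasing weights: the newest sample weighs most."""
    n = len(samples)
    weights = range(1, n + 1)
    return sum(w * s for w, s in zip(weights, samples)) / sum(weights)

class AperiodicProvisioner:
    """Event-driven (aperiodic) VCPU re-allocation sketch.

    Instead of re-allocating on a fixed timer, act only when the
    WMA-predicted demand drifts from the current allocation by more
    than a relative threshold, reducing how often the (costly)
    re-allocation algorithm is invoked.
    """
    def __init__(self, window=5, threshold=0.2, allocation=1.0):
        self.samples = deque(maxlen=window)  # sliding window of demand samples
        self.threshold = threshold           # relative drift that triggers action
        self.allocation = allocation         # current VCPU share for the domain

    def observe(self, demand):
        """Feed one demand sample (e.g., arrival rate or utilization).

        Returns the new allocation if re-allocation was triggered,
        otherwise None (no control action taken).
        """
        self.samples.append(demand)
        predicted = weighted_moving_average(self.samples)
        drift = abs(predicted - self.allocation) / max(self.allocation, 1e-9)
        if drift > self.threshold:
            self.allocation = predicted  # act only on significant drift
            return self.allocation
        return None
```

Under this sketch, small fluctuations leave the allocation untouched, while a sustained shift in the arrival pattern crosses the drift threshold and triggers a single re-allocation, which is what lets the aperiodic policy keep SLA violations low while invoking the algorithm less often than a periodic one.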

Keywords:

Cloud computing, cloud workload, server consolidation, server virtualization, simulation, VMM scheduling


References

  1. Armbrust, M., A. Fox, R. Griffith, A.D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica and M. Zaharia, 2010. A view of cloud computing. Commun. ACM, 53(1): 50-58.
  2. Cherkasova, L., D. Gupta and A. Vahdat, 2007. Comparison of the three CPU schedulers in Xen. SIGMETRICS Perform. Eval. Rev., 35(2): 42-51.
  3. Chia-Ying, T. and L. Kang-Yuan, 2013. A modified priority based CPU scheduling scheme for virtualized environment. Int. J. Hybrid Inform. Technol., 6(2).
  4. Chisnall, D., 2007. The Definitive Guide to the Xen Hypervisor. 1st Edn., Prentice Hall Press, Upper Saddle River, NJ, USA.
  5. Cillendo, E. and T. Kunimasa, 2007. Linux Performance and Tuning Guidelines. Redpaper, IBM.
  6. Herbst, N.R., N. Huber, S. Kounev and E. Amrehn, 2013. Self-adaptive workload classification and forecasting for proactive resource provisioning. Proceeding of the 4th ACM/SPEC International Conference on Performance Engineering (ICPE’13). Czech Republic, Prague.
  7. Huber, N., B. Fabian and K. Samuel, 2011a. Model-based self-adaptive resource allocation in virtualized environments. Proceeding of the 6th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS ’11), pp: 90-99.
  8. Huber, N., M. Quast, M.V. Hauck and S. Kounev, 2011b. Evaluating and modeling virtualization performance overhead for cloud environments. Proceeding of the International Conference on Cloud Computing and Services Science (CLOSER 2011). SciTePress, Noordwijkerhout, Netherlands, pp: 563-573.
  9. ITA, 1998. The Internet Traces Archives: WorldCup98.
  10. Jung, G., M. Hiltunen, K. Joshi, R. Schlichting and C. Pu, 2010. Mistral: Dynamically managing power, performance and adaptation cost in cloud infrastructures. Proceeding of IEEE 30th International Conference on Distributed Computing Systems (ICDCS, 2010), pp: 62-73.
  11. Kousiouris, G., T. Cucinotta and T. Varvarigou, 2011. The effects of scheduling, workload type and consolidation scenarios on virtual machine performance and their prediction through optimized artificial neural networks. J. Syst. Software, 84(8): 1270-1291.
  12. Lim, S.H., J.S. Huh, Y.J. Kim, G.M. Shipman and C.R. Das, 2010. A quantitative analysis of performance of shared service systems with multiple resource contention. Technical Report.
  13. Liu, Z., W. Qu, W. Liu, Z. Li and Y. Xu, 2014. Resource preprocessing and optimal task scheduling in cloud computing environments. Concurr. Comp-Pract. E., DOI: 10.1002/cpe.3204.
  14. Lu, L., H. Zhang, G. Jiang, H. Chen, K. Yoshihira and E. Smirni, 2011. Untangling mixed information to calibrate resource utilization in virtual machines. Proceeding of the 8th ACM International Conference on Autonomic Computing, pp: 151-160.
  15. NLANR, 1995. National Laboratory for Applied Network Research. Anonymized access logs.
  16. Rao, J., Y. Wei, J. Gong and C.Z. Xu, 2013. QoS guarantees and service differentiation for dynamic cloud applications. IEEE T. Network Serv. Manage, 10(1).
  17. Schwetman, H., 2001. CSIM19: A powerful tool for building system models. Proceeding of the 2001 Winter Simulation Conference, pp: 250-255.
  18. Schwiegelshohn, U. and R. Yahyapour, 1998. Improving first-come-first-serve job scheduling by gang scheduling. In: Feitelson, D.G. and L. Rudolph (Eds.), JSSPP'98. LNCS 1459, Springer-Verlag, Berlin, Heidelberg, pp: 180-198.
  19. Sethi, S., A. Sahu and S.K. Jena, 2012. Efficient load balancing in cloud computing using fuzzy logic. IOSR J. Eng., 2(7): 65-71.
  20. Sivanandam, S.N. and S.N. Deepa, 2007. Introduction to Genetic Algorithms. 2nd Edn., Springer-Verlag, New York.
  21. Smith, J.E. and R. Nair, 2005. Virtual Machines: Versatile Platforms for Systems and Processes. Morgan Kaufmann, San Diego.
  22. Sukwong, O. and H.S. Kim, 2011. Is co-scheduling too expensive for SMP VMs? Proceeding of the 6th Conference on Computer Systems (EuroSys '11), pp: 257-272.
  23. Suresh, S. and M. Kannan, 2014a. A study on system virtualization techniques. Proceeding of the International Conference on HI-TECh Trends in Emerging Computational Technology (ICECT, 2014). Virudhunagar, Tamilnadu, India.
  24. Suresh, S. and M. Kannan, 2014b. A performance study of hardware impact on full virtualization for server consolidation in cloud environment. J. Theor. Appl. Inform. Technol., 60(3).
  25. Watson, B.J., M. Marwah, D. Gmach, Y. Chen, M. Arlitt and Z. Wang, 2010. Probabilistic performance modeling of virtualized resource allocation. Proceeding of the 7th International Conference on Autonomic Computing (ICAC’10), pp: 99-108.
  26. Weng, C., Z. Wang, M. Li and X. Lu, 2009. The hybrid scheduling framework for virtual machine systems. Proceeding of the 2009 ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments. New York, USA.

Competing interests

The authors have no competing interests.

Open Access Policy

This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Copyright

© The Author(s) 2015.

ISSN (Online):  2040-7467
ISSN (Print):   2040-7459