
International Research Journal of Engineering and Technology (IRJET)    e-ISSN: 2395-0056 | p-ISSN: 2395-0072
Volume: 04 Issue: 05 | May 2017    www.irjet.net

LOAD BALANCING FOR CLOUD ECOSYSTEM USING ENERGY AWARE APPLICATION SCALING METHODOLOGIES

Asma Anjum1, Dr. Rekha Patil2

1PG Student, Department of CSE, PDA College of Engineering, Kalaburagi, Karnataka, India. asmacs13@gmail.com
2Professor, Department of CSE, PDA College of Engineering, Kalaburagi, Karnataka, India. rekha.patilcse@gmail.com

---------------------------------------------------------------------***---------------------------------------------------------------------

Abstract - Cloud computing delivers utility-oriented IT services to users worldwide. Nearly all companies are moving their data onto the cloud. Managing data on the servers and making it available to users on demand, in accordance with the SLAs and in an energy-efficient manner, is difficult. We describe an energy-aware application scaling and load balancing operation model for a cloud ecosystem. The main idea of our approach is to define an energy-optimal operation regime and attempt to maximize the number of servers operating in this regime. Idle and lightly loaded servers are switched to one of the sleep states in order to save energy. Servers are added in order to balance the load and avoid deadlock or overload conditions by deploying the scaling methodologies. We thereby show how the load is balanced by adding servers so that client requests are served. Distributing the workload evenly across a set of servers minimizes the response time, maximizes the throughput, and increases the system's resilience to faults by preventing overload.

Key Words: Energy Aware Load Balance, Server, Creating Load, System Model, Cloud Computing.

1. INTRODUCTION

The term "load balancing" means exactly what the name denotes: distributing the workload evenly across a set of servers to minimize the response time, maximize the throughput, and increase the system's resilience to faults by avoiding overload. An important method for reducing energy consumption is to concentrate the load on a small subset of servers and, whenever possible, switch the remaining servers to a low-power state. This observation suggests that the traditional approach to load balancing in a large-scale system should be reformulated as follows: distribute the workload uniformly over the smallest set of servers operating at optimal or near-optimal energy levels, while honoring the Service Level Agreement (SLA) between the cloud user and the CSP [1]. Idle and underutilized servers waste significantly more energy [2]. Energy efficiency is measured by the ratio of performance per Watt of power. Over the previous two decades, the performance of computing systems has increased much faster than their energy efficiency [3]. An ideal energy level is one at which the performance per Watt of power is maximized while requests are served consistently with the SLA [2].

Scaling is the technique of allocating additional resources to a cloud application. It is the ability of a system, process, or network to handle a growing load, or its capability to be enlarged to accommodate that growth. A scalable system is one whose performance improves after adding hardware. We distinguish two scaling modes: horizontal and vertical. Horizontal scaling is the most common method of scaling on a cloud; it is done by increasing the number of Virtual Machines (VMs) when the application load increases and decreasing this number when the load decreases. Vertical scaling keeps the number of VMs of an application constant but increases the amount of resources allocated to each of them.

Horizontal scalability is the ability to increase capacity by connecting multiple hardware or software entities so that they work as a single logical unit. When servers are clustered, the original server is scaled out horizontally. If a cluster requires more resources to improve performance and provide high availability (HA), an administrator can scale out by adding more servers to the cluster.
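The energy-aware consolidation idea described above — concentrating load on the smallest set of servers running near an energy-optimal utilization and switching the rest to a sleep state — can be illustrated with a minimal greedy sketch. This is not the paper's algorithm; the `Server` class, the capacities, and the `optimal_utilization` threshold are assumptions introduced purely for illustration.

```python
# Illustrative sketch of energy-aware consolidation: pack tasks onto the
# fewest servers operating near an energy-optimal utilization, leaving the
# remaining servers in a low-power sleep state. All names and thresholds
# here are hypothetical, not taken from the paper.

from dataclasses import dataclass

@dataclass
class Server:
    capacity: float        # maximum workload units the server can host
    load: float = 0.0      # currently assigned workload
    state: str = "sleep"   # "active" or "sleep"

def consolidate(servers, tasks, optimal_utilization=0.8):
    """Greedily assign tasks, waking a sleeping server only when the
    already-active servers cannot take more load without exceeding the
    assumed energy-optimal utilization level."""
    for task in sorted(tasks, reverse=True):   # place largest tasks first
        placed = False
        # Prefer servers that are already awake, to keep the active set small.
        for s in sorted(servers, key=lambda s: s.state != "active"):
            if s.load + task <= s.capacity * optimal_utilization:
                s.load += task
                s.state = "active"
                placed = True
                break
        if not placed:
            raise RuntimeError("insufficient capacity for task %.1f" % task)
    # Servers that received no load stay asleep, saving energy.
    return [s for s in servers if s.state == "active"]

servers = [Server(capacity=100.0) for _ in range(4)]
active = consolidate(servers, tasks=[50.0, 30.0, 40.0, 20.0])
print(len(active), "of", len(servers), "servers active")
```

With these example numbers, the 140 units of work fit on two servers held at or below 80% utilization, so the other two servers remain in the sleep state.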

Vertical scalability, on the other hand, increases capacity by adding more resources, such as additional CPU or memory, to a single machine. Vertical scaling ("scaling up") usually requires downtime while the new resources are added, and it has limits defined by the hardware.

Organization of the paper: Section 1, Introduction; Section 2, Related Work; Section 3, Proposed Work; Section 4, Implementation and Results; Section 5, Conclusion and Future Work.

© 2017, IRJET | Impact Factor value: 5.181 | ISO 9001:2008 Certified Journal | Page 479
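The horizontal-scaling rule described in the introduction — add VMs when the application load grows and remove them when it falls — can be sketched as a simple target-count computation. The function name, per-VM capacity, and bounds below are hypothetical choices for illustration only, not parameters from the paper.

```python
# Minimal sketch of a horizontal-scaling decision: derive the number of
# identical VMs needed so each stays within its capacity. The capacity and
# min/max bounds are illustrative assumptions.

import math

def target_vm_count(load, per_vm_capacity=100.0, min_vms=1, max_vms=10):
    """Return how many VMs the load requires, clamped to [min_vms, max_vms].
    Scaling out raises the count as load grows; scaling in lowers it."""
    needed = math.ceil(load / per_vm_capacity)
    return max(min_vms, min(max_vms, needed))

print(target_vm_count(250.0))   # rising load: scale out
print(target_vm_count(40.0))    # falling load: scale in
```

Vertical scaling, by contrast, would keep the VM count fixed and instead grow `per_vm_capacity` (more CPU or memory per machine), which is why it is bounded by the hardware of a single host.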

