
Research Areas in Cloud Computing

In this article, we will learn about the different research areas in cloud computing, which will help budding researchers pursue their research.

 

Introduction

According to IEEE Cloud 2015 [1], the following are the various research areas in cloud computing:

  • Cloud Computing Architectures and Cloud Solution Design Patterns
  • Infrastructure, Platform, Application, Business, Social, and Mobile Clouds
  • Storage, Data, and Analytics Clouds
  • Self-service Cloud Portal, Dashboard, and Analytics
  • Security, Privacy, and Compliance Management for Public, Private, and Hybrid Clouds
  • Cloud Quality Management and Service Level Agreement (SLA)
  • Cloud Configuration, Performance, and Capacity Management
  • Cloud Workload Profiling and Deployment Control
  • Cloud Software Patch and License Management
  • Cloud Migration, Cloud Composition, Federation, Bridging, and Bursting
  • Cloud Resource Virtualization and Composition
  • Cloud Provisioning Orchestration
  • High Performance Cloud Computing
  • Cloud Programming Models and Paradigms
  • Autonomic Business Process and Workflow Management in Clouds
  • Cloud DevOps
  • Green Cloud Computing
  • Innovative Cloud Applications and Experiences
  • Economic, Business and ROI Models for Cloud Computing
  • Cloud Computing Consulting

The above list of research areas in cloud computing is not an exhaustive list but only a candidate list.

 

Research Challenges in Cloud Computing

Although research in cloud computing has gained momentum in academic and industrial circles, it is still in its infancy. Many existing issues have not been fully addressed. Some of the challenging research issues in cloud computing [2] are mentioned below:

Automated Service Provisioning


One of the key features that cloud computing provides is the ability to acquire and release resources on demand. This is done by the cloud provider (service provider), and it is not an easy task. The objective of the service provider is to allocate and deallocate resources from the cloud so as to satisfy its Service Level Objectives (SLOs) while minimizing its operational costs. In a public cloud this becomes an even more challenging issue, as thousands or even more users may be accessing the same infrastructure.
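
As a rough illustration of this allocate/deallocate trade-off, the Python sketch below scales a server pool up when a latency SLO is violated and scales it down when utilization is low. The thresholds, the ClusterState structure, and the one-server-at-a-time policy are all illustrative assumptions, not any particular provider's API.

```python
# Minimal sketch of threshold-based automated provisioning.
# All thresholds, limits and the ClusterState class are hypothetical;
# a real provider would call its own resource-management API here.

from dataclasses import dataclass


@dataclass
class ClusterState:
    servers: int            # servers currently allocated
    avg_latency_ms: float   # measured average response time
    avg_utilization: float  # average CPU utilization in [0, 1]


LATENCY_SLO_MS = 200.0        # Service Level Objective on response time
SCALE_DOWN_UTILIZATION = 0.3  # below this, the pool is mostly idle
MIN_SERVERS, MAX_SERVERS = 1, 100


def provision(state: ClusterState) -> int:
    """Return the new server count: add capacity when the SLO is
    violated, release capacity when servers sit mostly idle."""
    if state.avg_latency_ms > LATENCY_SLO_MS and state.servers < MAX_SERVERS:
        return state.servers + 1   # SLO at risk: allocate
    if state.avg_utilization < SCALE_DOWN_UTILIZATION and state.servers > MIN_SERVERS:
        return state.servers - 1   # idle capacity: deallocate to cut cost
    return state.servers           # otherwise keep the pool unchanged


if __name__ == "__main__":
    print(provision(ClusterState(servers=4, avg_latency_ms=250.0, avg_utilization=0.8)))  # -> 5
    print(provision(ClusterState(servers=4, avg_latency_ms=90.0, avg_utilization=0.2)))   # -> 3
```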

Virtual Machine Migration

Virtual machine (VM) migration can provide significant benefits in cloud computing, as it enables load balancing across the data center. In addition, virtual machine migration enables robust and highly responsive provisioning in data centers.

The major problems in VM migration are detecting workload hotspots and transferring in-memory state consistently and efficiently, with integrated consideration of the resources used by applications and physical servers.
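
A minimal sketch of the hotspot-detection step is shown below, assuming per-VM utilization metrics are already being collected by a monitoring system; the threshold, the host layout, and the "migrate the busiest VM" policy are illustrative assumptions.

```python
# Illustrative hotspot detection for VM migration.
# Utilization numbers are assumed to come from a monitoring system;
# the 0.9 threshold and the host/VM layout are made up for the example.

HOT_THRESHOLD = 0.9   # a host above 90% CPU is considered a hotspot

hosts = {
    "host-1": {"vm-a": 0.55, "vm-b": 0.40},   # 95% total -> hotspot
    "host-2": {"vm-c": 0.30},                 # 30% total -> fine
}


def find_hotspots(hosts):
    """Return (host, busiest_vm) pairs for every overloaded host.
    Migrating the busiest VM away is one simple mitigation policy."""
    result = []
    for host, vms in hosts.items():
        total = sum(vms.values())
        if total > HOT_THRESHOLD:
            busiest_vm = max(vms, key=vms.get)
            result.append((host, busiest_vm))
    return result


print(find_hotspots(hosts))   # [('host-1', 'vm-a')]
```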

Server Consolidation

Server consolidation is an effective approach to maximizing resource utilization while minimizing energy consumption in a cloud computing environment. Live VM migration is often used to consolidate VMs residing on several under-utilized servers onto a single server, so that the remaining servers can be placed in an energy-saving state.

The server consolidation process should not hurt application performance. It is known that the resource usage (footprint) of individual VMs may vary over time, so it is often important to observe the fluctuations in VM footprints and use that information for effective server consolidation. The system must also react quickly to resource congestion that may be caused by VM consolidation.
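
Consolidation is essentially a bin-packing problem. The sketch below applies a simple first-fit-decreasing heuristic to hypothetical single-dimension (CPU) VM footprints; a real consolidation manager would pack along several resource dimensions and adapt as footprints fluctuate.

```python
# First-fit-decreasing sketch of server consolidation.
# Each VM is reduced to a single CPU-demand number; the demands below
# are invented for the example.

SERVER_CAPACITY = 1.0   # normalized CPU capacity of one server

vm_demands = {"vm-a": 0.5, "vm-b": 0.2, "vm-c": 0.4, "vm-d": 0.3, "vm-e": 0.1}


def consolidate(demands, capacity=SERVER_CAPACITY):
    """Pack VMs onto as few servers as possible (first-fit decreasing)."""
    servers = []  # each server is a dict {"load": float, "vms": [...]}
    for vm, demand in sorted(demands.items(), key=lambda kv: kv[1], reverse=True):
        for server in servers:
            if server["load"] + demand <= capacity:
                server["load"] += demand
                server["vms"].append(vm)
                break
        else:  # no existing server fits: open a new one
            servers.append({"load": demand, "vms": [vm]})
    return servers


for i, s in enumerate(consolidate(vm_demands)):
    print(f"server {i}: {s['vms']} (load {s['load']:.1f})")
# The five VMs fit on 2 servers; freed servers could enter a low-power state.
```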

Energy Management

Improving energy efficiency (saving energy) is another challenging issue in cloud computing. It has been estimated that powering and cooling servers accounts for 53% of the total operational expenditure of a data center. In 2006, data centers in the US consumed more than 1.5% of the total energy generated in that year. The goal is not only to cut down energy consumption in data centers but also to meet government regulations and environmental standards.

Traffic Management and Analysis

Analysis of data traffic is important for data centers and for users of cloud computing. Many web applications rely on analysis of data traffic to optimize the customer experience. Network operators also need to know how traffic flows through the network in order to make many of their management and planning decisions.

There are several challenges in extending existing traffic measurement and analysis methods from ISP and enterprise networks to data centers. First, the density of links is much higher than in ISP networks. Second, most existing traffic analysis methods scale only to a few hundred hosts, whereas a typical data center may contain thousands of hosts. Third, existing methods assume flow patterns that are reasonable in ISP networks, whereas applications in data centers, such as MapReduce jobs, can significantly change the traffic patterns.
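
As a small illustration of basic traffic analysis, the sketch below aggregates a handful of hypothetical flow records into per-source byte counts to find the heaviest talkers; real data-center telemetry involves orders of magnitude more flows and typically uses streaming or sampled methods.

```python
# Toy traffic analysis: aggregate flow records into per-source byte counts.
# The flow records are invented for the example; a real data center would
# collect them from switches at far higher volume.

from collections import Counter

# (source host, destination host, bytes transferred)
flows = [
    ("10.0.0.1", "10.0.0.7", 1_200_000),
    ("10.0.0.2", "10.0.0.7", 300_000),
    ("10.0.0.1", "10.0.0.9", 4_500_000),
    ("10.0.0.3", "10.0.0.2", 90_000),
]

bytes_per_source = Counter()
for src, dst, nbytes in flows:
    bytes_per_source[src] += nbytes

# Report the top talkers, the kind of summary operators and applications need.
for host, total in bytes_per_source.most_common(2):
    print(f"{host}: {total / 1e6:.1f} MB sent")
```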

Data Security

Perhaps the most popular of all the research challenges in cloud computing at present is the security of data in the cloud. Service providers in general do not have any control over the physical infrastructure, so the responsibility for the data lies in the hands of the infrastructure providers.

In the case of a Virtual Private Cloud (VPC), even though the user of the VPC can specify security settings, one cannot say for certain whether they are really implemented or not. The infrastructure provider must achieve two objectives: 1) confidentiality and 2) auditability. Confidentiality is usually achieved using cryptographic protocols, and auditability can be achieved using remote attestation techniques based on hardware such as the Trusted Platform Module (TPM).
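
One common way to obtain confidentiality is to encrypt data on the client side before it ever reaches the provider, so the infrastructure provider only stores ciphertext. A minimal sketch using the symmetric Fernet recipe from Python's third-party cryptography package is shown below; the sample data is made up, and key management (the hard part in practice) is not shown.

```python
# Client-side encryption sketch: data is encrypted before upload, so the
# infrastructure provider only ever stores ciphertext.
# Requires the third-party package:  pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # in practice, kept in a key-management system
cipher = Fernet(key)

plaintext = b"customer record: account=1234, balance=500"
ciphertext = cipher.encrypt(plaintext)   # what actually gets uploaded to the cloud

# ... later, after downloading the object back ...
assert cipher.decrypt(ciphertext) == plaintext
print("round trip OK; the provider never saw the plaintext")
```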

Software Frameworks

Cloud computing provides a compelling platform for hosting large-scale data-intensive applications that leverage MapReduce frameworks such as Hadoop. It is often possible to optimize the performance and cost of a MapReduce application by carefully selecting its configuration parameter values and by designing more efficient scheduling algorithms.
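
To make the programming model concrete, here is a tiny single-process imitation of the MapReduce flow (map, shuffle/group, reduce) in plain Python. Hadoop distributes exactly these stages across a cluster, and its many configuration parameters (split sizes, number of reducers, memory limits) are what such tuning work targets.

```python
# In-memory word count in the MapReduce style: map -> shuffle/group -> reduce.
# A single-process imitation; Hadoop runs the same stages across many machines.
from collections import defaultdict

documents = ["cloud computing research", "cloud security research challenges"]


def map_phase(doc):
    """Emit (word, 1) pairs for every word in a document."""
    return [(word, 1) for word in doc.split()]


def reduce_phase(word, counts):
    """Combine all counts emitted for one word."""
    return word, sum(counts)


# Shuffle: group intermediate pairs by key, as the framework does between phases.
groups = defaultdict(list)
for doc in documents:
    for word, count in map_phase(doc):
        groups[word].append(count)

results = dict(reduce_phase(w, c) for w, c in groups.items())
print(results)   # {'cloud': 2, 'computing': 1, 'research': 2, 'security': 1, 'challenges': 1}
```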

Storage Technologies and Data Management

Software frameworks such as MapReduce and its various implementations, like Hadoop and Dryad, are designed for distributed processing of data-intensive tasks. These frameworks operate on Internet-scale file systems such as GFS and HDFS, which differ from traditional file systems. This may lead to compatibility issues with legacy file systems and applications.
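
The difference shows up even in basic file access: applications typically go through Hadoop's own client instead of ordinary POSIX file calls. The sketch below shells out to the standard hdfs dfs command-line client; it assumes a working Hadoop installation and a reachable HDFS cluster, and the paths are placeholders.

```python
# Copying a local file into HDFS via the standard Hadoop CLI.
# Assumes Hadoop is installed and an HDFS namenode is reachable;
# /data/input.txt is a placeholder path.
import subprocess

subprocess.run(["hdfs", "dfs", "-mkdir", "-p", "/data"], check=True)
subprocess.run(["hdfs", "dfs", "-put", "-f", "input.txt", "/data/input.txt"], check=True)

# Legacy applications that expect open("/data/input.txt") will not see this file;
# they must be adapted to an HDFS client or to a compatibility layer.
listing = subprocess.run(["hdfs", "dfs", "-ls", "/data"],
                         check=True, capture_output=True, text=True)
print(listing.stdout)
```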

Novel Cloud Architectures

Most commercial clouds are hosted in data centers that are operated in a centralized way. Although this design has its advantages, it also comes with limitations such as high energy expense and high initial investment for constructing the data centers. One solution is to build multiple smaller data centers, which are easier and cheaper to construct. These small data centers may be scattered across the globe in various locations. Such data centers can provide geo-diversity, which is desirable for certain time-critical applications such as content delivery (video) and interactive gaming.
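
As a toy illustration of why geo-diversity helps such applications, a request can simply be directed to whichever small data center currently shows the lowest measured latency; the sites and latency figures below are invented.

```python
# Pick the closest of several small, geo-distributed data centers.
# Latencies would normally be measured (e.g., by probing); these are made up.

measured_latency_ms = {
    "us-east": 120.0,
    "eu-west": 35.0,
    "ap-south": 210.0,
}


def nearest_site(latencies):
    """Return the data center with the lowest round-trip latency."""
    return min(latencies, key=latencies.get)


print(nearest_site(measured_latency_ms))   # eu-west: best choice for this user
```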

References

[1] IEEE Cloud 2015 – http://www.thecloudcomputing.org/2015/index.html

[2] Q. Zhang, L. Cheng, and R. Boutaba, “Cloud computing: State-of-the-art and research challenges,” J. Internet Serv. Appl., vol. 1, no. 1, pp. 7–18, 2010.

[3] I. Sriram and A. Khajeh-Hosseini, "Research Agenda in Cloud Technologies," in Proc. 1st ACM Symposium on Cloud Computing (SoCC), 2010, pp. 1–11.


Suryateja Pericherla

Suryateja Pericherla is currently a Research Scholar (full-time Ph.D.) in the Dept. of Computer Science & Systems Engineering at Andhra University, Visakhapatnam. He previously worked as an Associate Professor in the Dept. of CSE at Vishnu Institute of Technology, India.

He has 11+ years of teaching experience and is an individual researcher whose research interests include Cloud Computing, Internet of Things, Computer Security, Network Security, and Blockchain.

He is a member of professional societies such as IEEE, ACM, CSI, and ISCA, and has published several research papers indexed in SCIE, WoS, Scopus, Springer, and other databases.
