Sunday, January 31, 2010

Scala Programming Language – An Overview


The Scala programming language belongs to a class of programming languages known as 'functional programming languages'. Before we proceed further, let us have a quick recap of some of the core concepts of functional programming, which we covered in an earlier blog post titled 'Functional Programming – An Overview'.

In Mathematics, ‘functions’ express the connection between parameters (inputs, in the case of computers) and the result (the output, in the case of computers) of certain processes. In each computation, the result depends on the parameters in a particular way and hence a ‘function’ is a good way of specifying a computation. This is the basis of ‘Functional Programming’.

The above notion is also closer to the 'human world' than to the world of the computer, where, in the early days of computing, programs consisted of instructions to modify memory, executed by the central processing unit. Thus, functional programming languages match the mathematical idea of functions.

A function is fundamentally a transformation. It transforms one or more inputs into exactly one output.

An important property of functions is that they yield no side effects – the same inputs will always yield the same outputs, and the inputs will not be changed as a result of the function. Every symbol in a functional programming language is immutable.

Functional programming treats computations – running a program, solving a numeric calculation – as the evaluation of functions.

Having covered the key concepts of functional programming, let us move on to the industry scenario that led to the evolution of the Scala programming language. Moore's Law observes that the number of transistors on a chip doubles roughly every two years, which for decades translated into ever-faster CPUs. These days, however, the focus is on creating CPUs with multiple cores – multiple processing units within a single chip. This means that a multithreaded environment executes on more than one CPU simultaneously, as opposed to the standard 'round-robin' cycle executing on a single CPU. Multithreading on multiple CPUs requires code that is highly thread-safe.


Attempts to solve this problem of writing highly thread-safe code have resulted in many new programming languages that address the concurrency problem, each with its own virtual machine or interpreter. This obviously means that a transition to a new platform is required, similar to what happened when organizations moved from C++ to Java about a decade ago. Such a transition is a non-trivial task, and most companies consider another transition too risky. This sets the stage for the arrival of the Scala programming language.

Scala is a statically typed, object-oriented programming language. In addition to being object-oriented, Scala is also a functional programming language and blends the best approaches of object-oriented and functional programming. Scala is designed and developed to run on the Java Virtual Machine (JVM), and Scala's operational characteristics are the same as Java's. In fact, the Scala compiler generates bytecode that is nearly identical to that generated by the Java compiler. This compatibility ensures that Scala can use existing Java code, which in turn means that Scala has access to the existing ecosystem of Java code, including open-source code.
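To make this blend concrete, let us look at a small sketch (the class, names and values below are invented purely for illustration, not taken from any real codebase). It defines an immutable class in the object-oriented style, transforms a list of instances with a pure function in the functional style, and calls a standard Java class directly:

// An immutable data type (object-oriented): data plus behaviour in one unit.
case class Employee(name: String, salary: Double)

object PayrollDemo {
  def main(args: Array[String]): Unit = {
    val staff = List(Employee("Asha", 50000.0), Employee("Ravi", 60000.0))

    // Functional style: derive a new, updated list with a pure
    // transformation instead of mutating the old one in a loop.
    val raised = staff.map(e => e.copy(salary = e.salary * 1.1))

    // Java interoperability: Scala can call Java classes directly.
    val sb = new java.lang.StringBuilder()
    raised.foreach(e => sb.append(e.name).append(' '))
    println(sb.toString.trim)  // prints: Asha Ravi
  }
}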

In Italian, 'scala' means stairway or steps. The name 'Scala' was selected to imply that the Scala programming language lets programmers 'step up' to a programming environment that incorporates the latest in programming language design, while at the same time letting them use all existing Java code. 'Scala' is also a contraction of 'scalable language', meaning the language is designed to grow with the demands of its users.

Scala has been generating significant interest in the software industry, and companies are announcing their move to it. Twitter announced in April 2009 that it had switched a large portion of its backend to Scala and intended to convert the rest. Wattzon has mentioned that its entire platform has been written from the ground up in Scala.

Professor Martin Odersky is the creator of the Scala language. As a professor at EPFL in Lausanne, Switzerland, he works on programming languages, more specifically languages for object-oriented and functional programming. His research thesis is that the two paradigms are two sides of the same coin, to be unified as much as possible. To prove this, he has experimented with a number of language designs, from Pizza to GJ to Functional Nets. He has also influenced the development of Java, as a co-designer of Java generics and as the original author of the current javac reference compiler. Since 2001, Prof. Odersky has concentrated on designing, implementing, and refining the Scala programming language.

Before we conclude this discussion, I would like to quote a reference to Scala from a previous blog post, titled ‘Technology Choices for 2009 and Beyond...’ posted on 24 September 2008.
Another relatively new [first public release in 2003] language, Scala, designed and built by the team led by Prof. Martin Odersky (EPFL, Switzerland) [Prof. Odersky has also influenced the development of Java as a co-designer of Java generics and as the original author of the current javac reference compiler] also seems to be promising. On a related note, in the article titled "Java EE meets Web 2.0" written by Constantine Plotnikov, Artem Papkov and Jim Smith (developerWorks, November 2007), the authors identify principles of the Java EE platform that are incompatible with Web 2.0 and introduce technologies, including Scala, that close the gap.

This concludes our discussion on the Scala programming language, which is expected to transform software engineering the way the Java programming language did about a decade ago.


~ Sunish

Sunday, January 24, 2010

Open Source Software and Enterprise Computing – An Introduction

If we ask the software fraternity to define ‘Open Source’ in one word, the answer will most likely be ‘collaboration’. To elaborate further, we can define ‘Open Source’ as public collaboration on a software project with contributors from across the globe.

The Open Source Initiative (http://www.opensource.org) provides a ten-point definition of open source, which can be summarized as follows. More information on each of these ten aspects of open source can be found at http://www.opensource.org/docs/definition.php.

1. Free redistribution
2. Source Code
3. Derived Works
4. Integrity of the Author’s Source Code
5. No Discrimination Against Persons or Groups
6. No Discrimination Against Fields of Endeavor
7. Distribution of License
8. License Must Not Be Specific to a Product
9. License Must Not Restrict Other Software
10. License Must Be Technology-Neutral

Some of the reasons that make ‘Open Source’ important are:

(a) A community-driven process, which encourages technical leadership to take a collaborative approach.
(b) Open Source can be a major source of innovation, with collaborators beyond physical boundaries participating in open source projects.
(c) Wide distribution and deployment of the standards that evolve from Open Source.
(d) Increased choice and flexibility for enterprise customers.

We will focus the rest of this discussion on Open Source Software and its adoption by enterprises.

There is little doubt that Open Source Software is experiencing explosive growth, and coupled with that growth, its adoption by enterprises is growing. Some of the factors prompting enterprises to adopt Open Source Software are:

1. Shrinking IT budgets
2. Increasing software licensing costs
3. The move toward integrated systems – one system for all enterprise users
4. The move to Web 2.0 initiatives to support marketing and enhance customer relationship management

Given the above background, the key factors that push adoption of Open Source in Enterprises are:

(a) Cost: Reduced budgets obviously result in cost-saving measures. Overall Information Technology costs can be reduced by implementing free or low-cost Open Source Software.
(b) Innovation: Open Source can be used to create new business offerings or innovative operation models, with substantial reduction in costs.
(c) Agility and Scale: Open Source Software provides the ability to quickly scale up and modify software systems to meet rapidly changing business requirements.
(d) No vendor lock-in: Reduces dependence on proprietary software vendors.
(e) Quality and Security: Improves the operational efficiency of enterprise architecture by leveraging the open source characteristics of transparency and rapid improvement.

Some of the Open Source characteristics that make it particularly suitable and appealing to Information Technology organizations are:

1. Ability to inspect and modify source code: Open source mandates the availability of source code. This enables enterprise adopters to inspect the source code and gain a better understanding of the software. It also helps in integrating Open Source Software with other systems. The ability to modify the source code enables enterprises to add new features and functionality. It also helps in adding security-related modifications to meet the organization's information security audit requirements.

2. Development Transparency: Development transparency means that the development process is carried out in public, with all code changes available for inspection. It is relatively easy for a user to ascertain the current state and history of an open source product. Testing is also carried out on a large scale by collaborating developers; reported bugs are listed and their status tracked.

3. Liberal Licensing Terms: Proprietary software licenses are restrictive in nature, with limits on installations, simultaneous users (floating licenses), fixed numbers of users, etc., and obviously, there is a fee associated with such licenses. On the other hand, Open Source licenses are expansive in nature and encourage widespread use (please see the definition of Open Source at the start of this blog post). Open Source licenses do not impose limits such as a fixed number of users or installations. Acquiring Open Source Software is also free: service providers may charge fees for services like customization, security audits and testing, but no fee is involved in accessing the software itself.

This concludes our discussion on 'Open Source Software and Enterprise Computing'.


~ Sunish

Sunday, January 17, 2010

Functional Programming - An Overview

Let us start this blog post on ‘Functional Programming’ with a widely accepted definition of computer programming – “computer programming is the process of creating a sequence of instructions which will enable a computer to do something”. Computer programming is a means to translate problems in the real world that need solving, into a format that computers can process.

Computer programming languages help convey instructions to computers. The goal of programming languages is to translate human language to machine code, the native language that computers understand.

Before we move on to an overview of functional programming, let us look at the different types (or paradigms) of programming languages. Please note that a given language is not limited to a single paradigm; a classic case is the Java programming language, which has elements of both the procedural and object-oriented paradigms.

a) Procedural Programming Languages: These languages specify a list of operations that a program must execute to reach a desired state. Each program has a starting state, a list of operations or instructions to complete, and an ending state. Two popular examples of procedural programming languages are BASIC (Beginner's All-purpose Symbolic Instruction Code) and FORTRAN (the IBM Mathematical FORmula TRANslating System).

b) Structured Programming Languages: Structured programming can be considered a special type of procedural programming, which requires the program to be broken down into small pieces of code, thereby increasing readability. Local variables (local to each subroutine) are preferred over global variables. These languages support a design approach called the 'top-down approach', in which the design starts with a high-level overview of the system. System designers then add detail to the components iteratively until the design is complete. Popular languages include Pascal, Ada and C.

c) Object Oriented Programming Languages: This paradigm is the latest and is considered the most powerful of all programming language paradigms so far. Here, system designers define both the data structures and the types of operations that can be applied to those data structures. This pairing of data with the operations on that data is known as an object. A program can then be viewed as a collection of objects that interact with one another. The important concepts associated with the object-oriented paradigm include classes/templates, inheritance, polymorphism, data encapsulation and messaging. A detailed note on these concepts is beyond the scope of our current discussion, though a small sketch follows this list. Popular languages following this paradigm include Java, Visual Basic, C#, C++ and Python.

d) Functional and Other Programming Languages: The fourth category includes functional programming and other paradigms, such as concurrent programming and event-driven programming, which are not covered above.
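As promised above, here is a minimal sketch of the object-oriented idea in (c), written in Scala for continuity with our other posts (the class and names are invented for illustration). An object pairs data – here, a balance – with the only operations that may be applied to that data:

class BankAccount(private var balance: Double) {
  // The data can only be changed through the operations defined
  // here (data encapsulation).
  def deposit(amount: Double): Unit = balance += amount
  def currentBalance: Double = balance
}

object AccountDemo {
  def main(args: Array[String]): Unit = {
    val account = new BankAccount(100.0)
    account.deposit(50.0)
    println(account.currentBalance)  // prints: 150.0
  }
}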

We will now return to the focus of our discussion – Functional Programming.

In Mathematics, ‘functions’ express the connection between parameters (inputs, in the case of computers) and the result (the output, in the case of computers) of certain processes. In each computation, the result depends on the parameters in a particular way and hence a ‘function’ is a good way of specifying a computation. This is the basis of ‘Functional Programming’.

The above notion is also closer to the 'human world' than to the world of the computer, where, in the early days of computing, programs consisted of instructions to modify memory, executed by the central processing unit. Thus, functional programming languages match the mathematical idea of functions. Functional programming offers a different approach to solving certain classes of problems, which we will cover later in this discussion.

The main characteristics of functional programming are as below:

(a) power and flexibility – many general, real-world problems can be solved using functional constructs
(b) simplicity – most functional programming languages have a small set of keywords and a concise syntax for expressing concepts
(c) suitability for parallel processing – with immutable values and operators, functional programs are better suited for asynchronous and parallel processing

Since the concept of ‘functions’ is core to Functional programming, let us define a function before we proceed further.

“A function is fundamentally a transformation. It transforms one or more inputs into exactly one output”.

An important property of functions is that they yield no side effects – the same inputs will always yield the same outputs, and the inputs will not be changed as a result of the function. Every symbol in a functional programming language is immutable.

Functional programming treats computations – running a program, solving a numeric calculation – as the evaluation of functions.
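As a small sketch in Scala (the function and values are invented for illustration), a pure function of this kind looks as follows:

object PureFunctionDemo {
  def main(args: Array[String]): Unit = {
    // A pure function: the output depends only on the inputs, and the
    // inputs are never modified by the call.
    def area(width: Double, height: Double): Double = width * height

    // Immutable values: once bound, they cannot be reassigned.
    val w = 3.0
    val h = 4.0

    println(area(w, h))  // prints 12.0 on every call, with no side effects
  }
}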

Some of the classes of problems that can benefit from a functional programming approach are listed below:

(i) multi-core and multi-threaded systems
(ii) sophisticated pattern matching
(iii) image processing
(iv) computer algebra
(v) lexing and parsing
(vi) artificial intelligence
(vii) data mining

Advantages of Functional Programming

(a) Unit Testing: We have already noted that every symbol in a functional programming language is final, and hence immutable. This implies that no function can modify variables outside its scope, and hence that functions cause no side effects. It also implies that the only effect of evaluating a function is its return value, and the only thing that affects the return value of a function is its arguments (please see the definition of 'function' above). This makes unit testing much easier, since only the boundary values of the arguments need to be tested.
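A brief Scala sketch of this advantage (the discount function and the chosen boundary values are invented for illustration; plain assertions stand in for a test framework):

object UnitTestDemo {
  // A pure function: no setup, mocks or hidden state are needed to test it.
  def discount(price: Double, percent: Double): Double =
    price - price * percent / 100

  def main(args: Array[String]): Unit = {
    assert(discount(100.0, 0.0) == 100.0)   // boundary: no discount
    assert(discount(100.0, 100.0) == 0.0)   // boundary: full discount
    assert(discount(200.0, 10.0) == 180.0)  // a typical case
    println("All checks passed")
  }
}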

(b) Debugging: The absence of side effects, as explained in (a) above, makes debugging easier, since bugs are local to a function. An examination of the stack quickly reveals the cause of the error.

(c) Concurrency: Functional programming does not allow data to be modified by two different threads, or twice by the same thread. With no shared mutable state, there is no scope for race conditions, and far less scope for deadlocks. This makes programming concurrent systems much easier.
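A minimal Scala sketch of this idea (the data is invented for illustration): several threads read the same immutable list, and because no thread can modify it, no locks are needed and no race is possible:

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object ConcurrencyDemo {
  def main(args: Array[String]): Unit = {
    // An immutable list shared by several threads.
    val readings = List(12, 7, 19, 3, 42)

    // Each computation may run on a different thread; since the data
    // cannot change underneath them, the results are deterministic.
    val work = Future.sequence(List(
      Future(readings.max),
      Future(readings.min),
      Future(readings.sum)
    ))

    println(Await.result(work, 5.seconds))  // prints: List(42, 3, 83)
  }
}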

Apart from being a more appropriate tool for certain classes of computing problems, functional programming also allows programmers to make more efficient use of multi-core systems, develop concurrent/parallel algorithms easily and utilize the growing number of cloud computing platforms.

Functional programming is also considered a means for programmers to improve their problem-solving skills; it allows programmers to look at problems from a different perspective and to become more insightful object-oriented programmers as well.

Popular functional programming languages include LISP, Haskell and F#. 


~ Sunish

Monday, January 11, 2010

Scratch – A route to fluency with new technologies


In the current scenario across the globe, where technology is an integral part of our lives, the younger generation is often referred to as 'Digital Natives' because of their apparent fluency with digital technologies. Please note the use of the expression 'apparent fluency': although young people are comfortable sending text messages (SMS), playing online games and browsing the web, such activities do not seem to make them 'fluent' with digital technologies in the real sense of the word. Despite their constant interaction with digital media, few young people can create their own games, animations or simulations. In short, if digital technology is considered a language, it is as if youngsters can "read" the language but cannot "write" or express themselves in it.

This set the stage for the creation of the Scratch programming language. When the Scratch team started off in 2003, its goal was to develop an approach to computer programming that would appeal to people who had not previously imagined themselves as computer programmers. The team's aim was to make it easy for everyone, of all ages, backgrounds and interests, to program their own interactive stories, games, animations and simulations, and to share their creations with other programmers.

The Scratch programming language was released to the public in 2007, and since then the Scratch website (http://scratch.mit.edu) has become a very active online community where people share, discuss and remix Scratch projects. The collection of projects is quite diverse – birthday cards, video games, interactive tutorials, virtual tours and many others, all programmed in Scratch. The core audience on the Scratch website is between the ages of 8 and 16, though there is a sizeable group of adult participants as well.

As users of the Scratch website program and share interactive projects, they:

1.    learn mathematical and computational concepts
2.    learn to think creatively
3.    reason systematically and
4.    work collaboratively

The above skills are often considered essential skills for the twenty-first century. In fact, the primary goal of the team that created Scratch was not to prepare people for careers as professional programmers, but rather to nurture the development of a new generation of creative, systematic thinkers who are comfortable using programming to express their ideas. Further, digital fluency requires not just the ability to chat, browse and interact, but also the ability to design, create and invent with new media.

When personal computers were first introduced in the early 1980s, there was a lot of enthusiasm for teaching all children how to program. The commonly used languages were Logo and BASIC [Beginner's All-purpose Symbolic Instruction Code]. (My school taught computer programming in 1988 in BBC BASIC, a variant of BASIC for the BBC Microcomputer.)

The main factors that prevented the initial enthusiasm from being long-lasting were:

1.    Difficulty in mastering the syntax of programming
2.    Programming activities based on scientific/mathematical tasks that did not generate enough interest in children

Based on these past programming initiative experiences, the Scratch team established three core design principles for Scratch:

1.    more tinkerable
2.    more meaningful
3.    more social
 

1.    More Tinkerable: The Scratch grammar is based on a collection of graphical "programming blocks" that children snap together to create programs. Connectors on the blocks suggest how they should be put together. Children can start by tinkering with the blocks, snapping them together in different sequences and combinations to see what happens. There is none of the obscure syntax or punctuation of traditional programming languages. Getting started is easy, and the experience is playful.

Figure 1: Sample Scratch Scripts

Scratch blocks are shaped to fit together only in ways that make syntactic sense. Control structures like 'forever' and 'repeat' are C-shaped, to suggest that blocks should be placed inside them and to indicate the concept of scoping. Blocks that output values are shaped according to the types of values they return: ovals for numbers and hexagons for Booleans. Conditional blocks ('if' and 'repeat-until') have hexagon-shaped voids, indicating that a Boolean is required.

2.    More Meaningful: It is widely accepted that people learn best, and enjoy it most, when they are working on personally meaningful projects. While developing Scratch, the team placed a high priority on:

a.    diversity – supporting many different types of projects, such as stories, games, animations and simulations, so that people with widely varying interests can all work on projects they care deeply about.
b.    personalization – making it easy for people to personalize their Scratch projects by importing photos and music clips, recording voices and creating graphics.

3.    More Social: The development of the Scratch programming language has been tightly coupled with the development of the Scratch website. From the Scratch team's perspective, for Scratch to succeed it had to be linked to a community where people could support one another, collaborate, critique one another's work and build on it. The concept of sharing is built right into the Scratch user interface, with a prominent Share menu and icon at the top of the screen that allows a project to be uploaded to the Scratch website. Once a project is on the website, anyone can run it within a browser, comment on it, vote for it, or download it to view and revise the scripts. All projects shared on the website are covered by a Creative Commons license.

Looking at the future of the Scratch programming language, the following are a few of the major directions in which the project will be moving:

1.    More Tinkerable, More Meaningful and More Social
2.    Scratch Sensor Board – for interacting with the physical world
3.    Scratch for mobile devices
4.    Web based version of Scratch
5.    Scratch-Ed – for Scratch educators; to share ideas, experiences and lesson plans

This brings us to the end of this blog post on Scratch, which is on a mission to expand the notion of digital fluency.

Thanks for your interest and for reading this blog post.


~ Sunish

Wednesday, January 06, 2010

Cloud Computing - Part Two


In part two of this two-part blog post on cloud computing, we will cover:

1. Concerns related to cloud computing
2. Factors that can accelerate the widespread adoption of cloud computing

1. Concerns related to cloud computing

(a) Security: One of the biggest concerns related to cloud computing is security, because sensitive data may no longer reside on dedicated hardware secured within the enterprise's own data centers. If the cloud is not secure enough, enterprises will hesitate to migrate their business data to a cloud platform.

(b) Poor Service Level Agreements: A Service Level Agreement (SLA) is an integral part of the business relationship between a service provider and a customer. An SLA is essentially a contract that clearly defines the business relationship, assures the customer that the service will meet stated requirements, and provides contingencies in case issues arise.

Due to poor or non-existent Service Level Agreements, confidence in cloud computing suffers and adoption slows. Most enterprise IT organizations will not adopt cloud services on a large scale until service levels can be clearly spelled out and backed up. For many IT organizations, a Service Level Agreement is a requirement for using any vendor's service, since the absence of an SLA puts the business at risk from an operational, financial or liability standpoint.

The main issues commonly found in cloud computing related Service Level Agreements are:

•    Lack of guaranteed availability
•    Lack of guaranteed performance
•    Lack of guaranteed support and response time


(c) Inadequate Risk Assessment: Risk Assessment and Management is often considered the greatest concern in cloud computing. Risks associated with cloud computing can be generally classified into:

  (i) Legal, compliance and reputation risks
  (ii) Operational risks

Legal, compliance and reputation risks can result from cloud computing vendors leaking, losing, breaching, damaging or impeding access to various types of sensitive or valuable information. When information is leaked, damaged, or lost by a cloud computing vendor, the customer organization may face legal or regulatory consequences for which there is little recourse. Cloud customers are unlikely to repair the reputation damage by transferring the responsibility to the cloud vendor.

The majority of the operational risks of cloud computing services are related to IT security, performance or availability. Small to medium-sized organizations could see a net gain in operational security by using a professional cloud computing service. However, larger enterprises may see lower levels of security in the areas of strong encryption, access control, monitoring and physical separation of resources.

(d) Vendor Lock-in: Vendor lock-in is a real and major concern in cloud computing. The factors that lead to vendor lock-in are:

    (i) Lack of interoperability between cloud services
    (ii) Inability to migrate to other cloud services
    (iii) Vendor management limitations at the customer’s end

(e) Management Issues: There are two management issues often associated with cloud computing – performance monitoring and troubleshooting, and data management. Many cloud computing service providers do not provide adequate tools for performance monitoring, and many do not have the ability to troubleshoot effectively when issues arise. Similarly, some vendors do not provide tools for metadata manipulation or data extraction.

2. Factors that can accelerate the widespread adoption of cloud computing

(a) Expenditure and ROI: As mentioned in part one of this post, cloud computing enables customers to defer large capital expenditures. This will probably be the biggest factor driving the widespread adoption of cloud computing. The current model is to buy as much infrastructure as is needed to meet estimated peak capacity, which in most cases results in under-utilized IT resources. Cloud computing offers the ability to scale up and down with demand, and a pay-as-you-go business model where the customer pays only for the services actually used. In financial terms, this translates into less capital expenditure and more operational expenditure. The advantage of operational expenditure is that it can be fine-tuned as needed, resulting in more efficient utilization of financial resources and a better return on investment (ROI).

(b) Widespread Mobile Internet Access: It is fair to assume that in another 5 to 6 years, significant progress will be made in Internet connectivity, resulting in the ability to connect to the Internet anywhere it is possible to connect to a mobile telecommunication tower. Further, the spread of 4G wireless standards will bring broadband Internet access to remote locations and introduce true broadband connectivity to automobiles, trains and even commercial aircraft. This will boost cloud computing acceptance, as Internet access is a prerequisite for most cloud computing models. Another factor that will help the acceptance of cloud computing is the availability of smartphones and netbooks, which help mobile users connect to the Internet.

(c) Offline Access for Online Applications: Google Mail (Gmail) is a commonly cited example of an online application that remains available for offline use when there is no Internet connectivity. This allows the user to continue working while disconnected from the online application hosted on a cloud computing platform. On restoration of Internet connectivity, changes made to the offline version are synchronized with the online version of the application. For cloud computing applications, this means that Internet connectivity is not always required for users to work with the application.

(d) Separation of Data from Applications: In application development, it is becoming increasingly common practice to separate data from applications. To enable users to connect with minimal system prerequisites, application front ends are delivered via web pages that can be accessed from any browser. The backend is maintained separately, powered by highly scalable databases. Factors like WAN (Wide Area Network) speeds of over 100 Mbps, decreasing bandwidth costs and WAN acceleration technologies will assist this separation of data and applications.

This concludes part two of this two-part blog post on "Cloud Computing".


~ Sunish




Cloud Computing - Part One

Cloud computing, which extends the enterprise beyond the traditional data center walls, is quietly winning over CIOs across the world. Cloud computing not only offers a viable solution to the problem of addressing scalability and availability concerns for large-scale applications, but also shows the promise of sharing resources to reduce the cost of ownership. The concept has evolved over the years, from data centers to present-day infrastructure virtualization. Although cloud computing is bringing about major changes in the way traditional IT infrastructure is managed, it is still not mature enough for widespread adoption in the IT industry.

We will look at a few aspects of cloud computing, such as:

1.    What is cloud computing?
2.    Advantages of cloud computing
3.    Concerns related to cloud computing
4.    Factors that can accelerate the widespread adoption of cloud computing

In part one of this two-part blog post, we will cover:

1. What is cloud computing?
2. Advantages of cloud computing

1. What is Cloud Computing?

A commonly found definition of cloud computing is:  


A set of disciplines, technologies, and business models used to render IT capabilities as on-demand services.

A frequently asked question concerns the origin of the term 'cloud'. In documents related to the internet, it is common practice to draw the internet diagrammatically as a cloud, reflecting its distributed nature. Cloud computing has a similarly distributed nature, and hence the term 'cloud' was adopted.

Cloud computing is also often referred to as ‘the cloud’.

The common characteristics of cloud computing include:

(a) Shared Infrastructure: Under the cloud business model, the cloud service provider invests in the infrastructure necessary to provide software, platforms and related infrastructure as a service to multiple consumers. Hence, service providers have a financial incentive to leverage the infrastructure across as many consumers as possible.


(b) On-demand self-service: On-demand self-service is the cloud customer's ability to purchase and use cloud services as needed. For example, as the number of users supported by the customer's application increases, the customer can add more storage space or processing power. When the enhanced computing power or storage is no longer needed, the customer can scale down as well. Thus, cloud computing's ability to quickly provision and deprovision IT services creates an elastic and scalable IT resource. It is a pay-as-you-go model in which customers pay only for the services they actually use.

As an added advantage, cloud vendors can also provide an application programming interface (API) that enables the customer to scale cloud services up or down programmatically (or automatically, through a management application).
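To illustrate the idea, and only the idea – the trait, method names and scaling rule below are entirely hypothetical and do not correspond to any real vendor's API – programmatic scaling might look like this in Scala:

// A purely hypothetical cloud API, invented for this sketch.
trait CloudApi {
  def currentInstances(service: String): Int
  def setInstances(service: String, count: Int): Unit
}

object AutoScaler {
  // Scale the service to roughly one instance per 100 requests per
  // second, never dropping below a single instance.
  def autoScale(api: CloudApi, service: String, requestsPerSecond: Double): Unit = {
    val desired = math.max(1, math.ceil(requestsPerSecond / 100.0).toInt)
    if (desired != api.currentInstances(service))
      api.setInstances(service, desired)
  }
}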

(c) Consumption-based pricing model: As explained in (b) above, customers pay only for the services they actually use, resulting in per-hour or per-GB (gigabyte) prices. For example, CPU (central processing unit, i.e., computing power) time can be billed by the minute or hour during which the CPU is actually in use. Data storage can be charged on the basis of GB stored, and data transfer can be billed on the basis of MB (megabytes) or GB. In practice, it is also common for vendors to vary the pricing model for data storage and data transfer based on the geographic proximity of customers to the vendor's data centers.
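As a worked illustration (the rates below are invented for this sketch; real vendor pricing varies), a month's bill under such a model might be computed as follows:

object BillingDemo {
  // Hypothetical rates, for illustration only.
  val cpuRatePerHour: Double    = 0.10 // per CPU-hour used
  val storageRatePerGB: Double  = 0.15 // per GB stored per month
  val transferRatePerGB: Double = 0.12 // per GB transferred

  def monthlyBill(cpuHours: Double, storedGB: Double, transferredGB: Double): Double =
    cpuHours * cpuRatePerHour +
      storedGB * storageRatePerGB +
      transferredGB * transferRatePerGB

  def main(args: Array[String]): Unit = {
    // 200 CPU-hours, 50 GB stored, 30 GB transferred in the month:
    printf("Monthly bill: %.2f%n", monthlyBill(200, 50, 30))
    // prints: Monthly bill: 31.10  (20.0 + 7.5 + 3.6)
  }
}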

2. Advantages of Cloud Computing

Some of the key advantages of cloud computing can be listed as below:

(a) Simplifies and optimizes IT resources: In the current IT scenario, many organizations own and operate all of the IT resources needed to meet their business objectives. Such organizations are often forced to install, maintain and upgrade complex solutions integrating different applications, operating systems, servers, networks and storage to meet ever-growing business needs. This drives up IT operational costs and prevents IT organizations from focusing on strategic business initiatives. This in-house management of IT resources also results in large capital expenditures that return little value to the business.


In the future, as cloud computing gains acceptance, organizations can reduce the size and complexity of internal IT operations by shifting non-strategic but essential IT resources to a cloud computing platform. Internal IT resources can then focus on more important, higher-level projects that drive core business initiatives.

(b) Cuts costs and moves CAPEX to OPEX: Complex internal IT infrastructures consume a lot of electric power and need operational personnel to monitor and manage expensive, underutilized IT equipment on a 24x7 basis. In some business scenarios, intensive computing power and storage capacity are required only for a few hours or days per month. Moreover, capital expenditure (CAPEX) is often more tightly controlled by finance departments than operational expenditure (OPEX).


Moving to the cloud helps IT organizations release the workload on their already strained data centers. Cloud computing's on-demand, consumption-based pricing model can help IT organizations defer large capital expenses or even avoid costs altogether.

Another classic case is the test hub that software development companies employ to simulate real-world scenarios. In test hubs, the IT resource configurations are often much larger and more complex than typical development environments. Cloud computing provides a quick and cost-effective way to boost computing power and data storage for simulating real-world scenarios in test hubs.

Since cloud computing expenses are classified as operational expenditure, they are subject to less budgetary control, as explained above.

(c) Improved IT Resource Management: The IT resource procurement model in a typical organization is often an inefficient supply chain. The procurement cycle starts with system administrators predicting usage patterns and factoring them into buying decisions, to ensure sufficient capacity to satisfy growth over time. The procurement process must also allow for contingencies like delayed delivery of equipment, non-working equipment, slow budgetary approvals and poor forecasting. In effect, more resources than needed are purchased, and the resources in operation are underutilized.
 
Cloud computing's on-demand, pay-as-you-go, consumption-based procurement model enables IT organizations to manage their IT resources efficiently and ensure a better return on investment.

(d) Inexpensive Disaster Recovery: Building data centers with enough redundancy for disaster recovery can be an expensive proposition, and using an out-of-region co-location facility is also difficult without incurring high costs. Hence, many organizations have poorly tested or even non-existent disaster recovery plans.

Here again, cloud computing services provide a viable alternative for increasing business continuity through disaster recovery planning, without incurring the high costs mentioned above.

This concludes part one of this two-part blog post on "Cloud Computing". In part two of this post, we will look at:

3. Concerns related to cloud computing
4. Factors that can accelerate the widespread adoption of cloud computing

~ Sunish