Amazon AWS 19th Price Reduction – a Closer Look

If you have been following me regularly, you would know that I maintain that charges for cloud services are not coming down in proportion to the routine reduction of hardware costs. So, let us examine whether Amazon has reversed the trend with its latest announcement of a price reduction. On 6th March, 2012 Amazon announced:

“…a reduction in Amazon EC2, Amazon RDS, and Amazon ElastiCache prices. Reserved Instance prices will decrease by up to 37% for Amazon EC2 and by up to 42% for Amazon RDS across all regions. On-Demand prices for Amazon EC2, Amazon RDS, and Amazon ElastiCache will drop by up to 10%. We are also introducing volume discount tiers for Amazon EC2, so customers who purchase a large number of Reserved Instances will benefit from additional discounts. Today’s price drop represents the 19th price drop for AWS**, and we are delighted to continue to pass along savings to you as we innovate and drive down our costs…”

**since the launch of the service – (my addition)

How much of this (37%, 42%, 19th etc.) is sales talk and how much of it is reality? Let me just present the data that I have collected.

Have a look at the data and be the judge.

[Update: Why am I not surprised that Google drops the price of Cloud Storage service within a week?]

[Update: This was going to happen and it has happened within 10 days! Microsoft Trying Hard to Match AWS, Cuts Azure Pricing]

How does the current price compare with what existed in January, 2010?

I had taken a dipstick of the prevailing AWS prices in Jan-10, which you can check here. There is no doubt that the breadth and depth of the offerings have increased significantly, but we can still compare the prices of those offerings which existed then.

Here is a comparison between prices then and price now.

Item | Jan-10 | Mar-12
On-Demand Instance: Small, Linux, N. Virginia (per hour) | $0.085 | $0.080
On-Demand Instance: Quadruple Extra Large, Windows, N. California (per hour) | $3.160 | $2.504
Data Transfer – In | Free till June 2010 | Free
Data Transfer – Out, per GB depending on the total monthly volume | $0.10 to $0.17 | $0.05 to $0.12
Storage (EBS), per allocated GB per month | $0.10 | $0.10
I/O Requests, per million I/O | $0.10 | $0.10
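Since percentages are easier to judge than raw cents, here is a quick back-of-the-envelope calculation of the drop for the items in the table above (a minimal sketch; the figures are simply taken from the table, and for data transfer out I compare the top of the range):

```python
# Percentage drop between Jan-10 and Mar-12 for items in the table above (US$).
prices = {
    "On-Demand Small, Linux, N. Virginia (per hour)":       (0.085, 0.080),
    "On-Demand Quad XL, Windows, N. California (per hour)": (3.160, 2.504),
    "Data Transfer Out, top of range (per GB)":              (0.17,  0.12),
    "EBS Storage (per allocated GB-month)":                  (0.10,  0.10),
}

for item, (jan10, mar12) in prices.items():
    drop = (jan10 - mar12) / jan10 * 100
    print("%-55s %.3f -> %.3f  (%.1f%% lower)" % (item, jan10, mar12, drop))
```

So for the two line items a typical small customer is most likely to notice – a small Linux instance and EBS storage – the reduction over two years works out to roughly 6% and 0% respectively.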

Price Reduction in last 3 years

I could locate 12 instances of price reduction announcements in the last 3 years. Except for the current announcement, all the others lowered the price of only one element of AWS. Here is the summary:

  • On-demand instance: 2 Times (Mar-12 & Oct-09)
  • Reserved Instances: 2 Times (Mar-12 & Aug-09)
  • Storage: 3 Times (Mar-12, Feb-11 & May-10)
  • Data Transfer: 2 Times (Jul-11 & Feb-10)
  • Cloud Front: 2 Times (Jul-11 & Jun-10)
  • Cloud Watch: 1 Time (May-11)
  • Premium Support: 1 Time (Jan-11)

Here is the full list:

  • Mar-12 – New, lower pricing for Amazon EC2, RDS, and ElastiCache (All): Reduction in Amazon EC2, Amazon RDS, and Amazon ElastiCache prices. Reserved Instance prices will decrease by up to 37% for Amazon EC2 and by up to 42% for Amazon RDS across all regions. On-Demand prices for Amazon EC2, Amazon RDS, and Amazon ElastiCache will drop by up to 10%.
  • Jul-11 – New Lower Pricing Tiers for Amazon CloudFront (Cloud Front): Lowered prices for Amazon CloudFront — we’ve added new usage tiers in every region, and in the US and Europe we’ve reduced data transfer pricing in every existing tier.
  • Jul-11 – AWS Lowers Data Transfer Costs – Effective May 1 (Data Transfer): We will no longer charge a separate fee for internet data transfer in. For internet data transfer out, in the US and Europe we’ve reduced the price at every existing usage tier and in all regions.
  • May-11 – Amazon CloudWatch Announces Custom Metrics, Lower Prices for Amazon EC2 Monitoring (Cloud Watch): We are lowering the price of existing Detailed Monitoring for Amazon EC2 instances by 68% to $3.50 per instance per month.
  • Feb-11 – Amazon S3 announces new lower prices for standard storage (Storage): All Amazon S3 standard storage customers will see a reduction in their storage costs. For instance, if you store 50 TB of data on average, you’ll see a 12% reduction in costs, and if you store 500 TB of data on average, you’ll see a 13.5% reduction in costs.
  • Jan-11 – AWS Introduces New Premium Support Plans, Lowers Usage Prices by 50% on Existing Plans (Premium Support): Usage pricing on existing Premium Support Gold and Silver offerings reduced by 50%.
  • Jun-10 – Amazon CloudFront Adds HTTPS Support, Lowers Prices, Opens NYC Edge Location (Cloud Front): Reduced our pricing for regular HTTP requests by 25%: prices for HTTP requests now start at $0.0075 per 10,000 requests.
  • May-10 – New Lower Prices for High Memory Double and Quadruple XL Instances (Storage): Lowered the On-Demand and Reserved prices for High Memory Double Extra Large (m2.2xlarge) and Quadruple Extra Large (m2.4xlarge) DB Instances.
  • Feb-10 – AWS Announces Lower Pricing for Outbound Data Transfer (Data Transfer): Lowering AWS pricing for outbound data transfer by $0.02 across all of our services, in all usage tiers, and in all Regions.
  • Nov-09 – Announcing Lower Amazon EC2 Instance Pricing (On-Demand Instances): Lowering prices up to 15% for all On-Demand instance families and sizes.
  • Oct-09 – New Lower Price for Windows Instances with Authentication Services (On-Demand Instances): Removed the distinction between Amazon EC2 running Windows and Amazon EC2 running Windows with Authentication Services.
  • Aug-09 – New Lower Prices for Amazon EC2 Reserved Instances (Reserved Instances): Lowered the one-time fee for all Amazon EC2 Reserved Instances by 30%.

Amazon AWS pricing details

Here is a Snapshot of the pricing as it exists now (March, 2012)

I will be able to use this in future for comparison!

Big Data – Is it a solution in search of a problem?

If you look at the predictions made for 2012, you will find a new entry which was not there last year. Be it Gartner, Forrester or McKinsey – “Big Data” finds a place in the predictions.

So, what is big data? Is it the next path-breaking technology which will change everything, or is it just hype which will die down after some time?

Let us take a realistic look at what the term big data means and what problems it can solve.

What is “Big Data”?

(The Wikipedia page on Big Data is not that good. The clearest explanation I have found is from O’Reilly Radar – here is the link)

Here is a short explanation.

Big Data is the name given to the class of technologies that you need to use when your data volume becomes so large that RDBMS technologies can no longer handle it.

Big data spans three dimensions (taken from this article from IBM):

  • Variety – Big data extends beyond structured data, including unstructured data of all varieties: text, audio, video, click streams, log files and more.
  • Velocity – Often time-sensitive, big data must be used as it is streaming in to the enterprise in order to maximize its value to the business.
  • Volume – Big data comes in one size: large. Enterprises are awash with data, easily amassing terabytes and even petabytes of information.

In short – if your data volume can be handled efficiently by RDBMS you NEED NOT worry about Big Data.

How did it all start?

With the advent of cloud computing, which provided easy access to massive amounts of distributed computing power, there was a realization that an RDBMS cannot be effectively parallelized. In fact, the CAP theorem states that Consistency, Availability & Partition Tolerance cannot all be guaranteed simultaneously. This led to the NoSQL movement, and multiple non-relational databases sprang up.

The trigger point for Big Data came when Google published its paper on the “Map-Reduce” algorithm, which involves processing highly distributable problems across huge datasets using a large number of computers. Map-Reduce is at the heart of Google’s search engine.
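To make the programming model concrete, here is a toy, single-machine sketch of word counting in the Map-Reduce style (the function names are mine and this is not Google’s or Hadoop’s actual API; in a real cluster the map calls, the grouping and the reduce calls would be spread across many machines):

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in one document.
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(word, counts):
    # Reduce: combine all the values emitted for one key.
    return word, sum(counts)

def mapreduce_wordcount(documents):
    # Shuffle/group: gather every value emitted for the same key together.
    groups = defaultdict(list)
    for doc in documents:
        for word, count in map_phase(doc):
            groups[word].append(count)
    # Apply the reducer once per key.
    return dict(reduce_phase(w, c) for w, c in groups.items())

print(mapreduce_wordcount(["big data is big", "data about data"]))
# -> {'big': 2, 'data': 3, 'is': 1, 'about': 1}
```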

Takeoff happened when the Apache open source “Hadoop” project created its own implementation of Map-Reduce. The largest Hadoop implementation is probably at Facebook.

In short: Big Data requires large DISTRIBUTED processing power.

Why would you want to process so much data?

There are 3 basic assumptions which are driving the big data movement:

  1. Faster analysis of larger operational data will help you make better decisions
  2. More in-depth analysis of customer data will guide you to better customer segmentation
  3. Insight into larger data sets will help you come up with innovative product designs

Companies that have successfully leveraged this include Google, Facebook, Amazon, Walmart and Yahoo.

In short – the ASSUMPTION is that more data and faster analytics will lead to more innovation and better decision making.

3 Prerequisites for leveraging Big Data

Let us assume that your data volume is large enough and you have access to enough distributed processing power. Will that be sufficient for you to venture into big data?

No … you need three more things.

  1. A business problem which you think the data at your disposal can help resolve
  2. A set of questions to be answered through data analysis
  3. An algorithm to analyze the data – this is the domain of the new field of Data Science

Big Data will be useful only if you are equipped with all these.

Therefore, for most of us, Big Data is a solution which is in search of a problem.

Trend in Cloud Computing Adoption – 2012

What can we expect from cloud computing in 2012? Where will cloud computing be one year from now?

  1. The basic premise of (1) economy of scale, (2) pay for what you use and (3) better utilization through sharing will remain intact – though some reports challenging the extent of cost saving will emerge.
  2. Amazon will extend its lead over others with the most comprehensive offering on IaaS – competitors will try to carve out their own niche.
  3. Google will not make much headway in the enterprise segment – perpetual beta does not gel with the enterprise.
  4. Microsoft will do just enough on office suite to keep competition at bay – but not too much to cannibalize its core office business.
  5. The same will happen with major ERP vendors – they will make just enough noise but stop short of cannibalizing their core business.
  6. Every vendor will look for a slice of the private & hybrid cloud pie – but actual adoption will be very low and the talk will shift to governance being the key.
  7. Critical concerns (both real and perceived) like (1) security, (2) privacy, (3) SLA and (4) compliance will also remain – like objections to credit card usage on the net, they will slowly go away – but the tipping point will not be 2012.

Do you agree with these points?

Actually, this is what I had predicted for 2011 and these points look perfectly valid for 2012.

Would I want to add any other point to this list? I don’t think so.

Cloud Computing Adoption progressing at a Snail’s Pace

If you look back at the important cloud computing events, you will find that nothing of much significance happened in 2010. The same can be said for 2011, and I suspect that 2012 will not be any different.

But, one thing has changed during 2011.

Neither cost saving nor flexibility is the primary driver for cloud adoption

There is a clear indication that mobility has become the prime reason for cloud adoption.

Here are the results of two surveys:

  1. IBM: 51% of respondents stated that adopting cloud technology is part of their mobile strategy.
  2. CSC: 33% adopted cloud primarily for accessing information from any device as against only 17% who adopted for cost saving.

The implication is that cloud computing is becoming an enabler for mobility and mobility is the big thing. Cloud computing becomes a means to an end.

What will the implication be?

  • Budget will get allocated for mobility and not for cloud computing, though people will use the cloud to achieve mobility.
  • Mobility solutions will include a cloud component, rather than cloud solutions including a mobility component.

Amazon, Google and Microsoft

Amazon continues to lead in IaaS with more offerings and more availability zones – it is also trying to get into PaaS.

Microsoft continues to do just enough on office suites to keep competition at bay – it is fighting a battle of survival in the mobile and tablet space.

Google has still not made much headway into the enterprise – in spite of changing direction in many ways.

  • It has a new CEO.
  • It has closed down Google Labs.
  • It has had a reasonably successful launch of its social media platform.
  • It discontinued Google App Engine for Business.
  • It has modified its search algorithm to incorporate social data.

On the whole, as far as cloud computing is concerned, there is hardly any change.

What about Big Data?

Most analysts have proclaimed that “Big Data” is the next big thing. Big data without cloud computing is difficult to imagine.

  • Is Big Data part of cloud or is it part of analytics?
  • Is it to be treated as a separate category?
  • Or, is it a solution in search of a problem?

It is obvious that the application of big data is limited to a few specific sets of problems. The key point we need to remember is that big data will not be of any use unless you are ready to ask the right questions – but that is a separate topic.

Finally…

For everything to go into cloud and for us to access it from any device from anywhere we need wireless bandwidth. Do we have enough of it?

Look at some of these stats (picked up from this article):

  • In October 2011, the number of wireless devices in the U.S. exceeded the number of people.
  • By 2014, voice traffic will comprise only 2 percent of the total wireless traffic in the United States.
  • Smartphones consume 24 times more data than old-school cell phones, and tablets consume 120 times more data than smartphones.
  • Mobile networks in North America were running at 80 percent of capacity.
  • With advancements in connected cars, smart grids, machine-to-machine (M2M) communication, and domestic installations such as at-home health monitoring systems, wireless demands will only increase.

Will cloud computing hit a road block of limited wireless bandwidth?

Proposed Google App Engine pricing demonstrates the hidden danger of cloud

In May this year, Google had announced that GAE is almost ready to graduate from Preview status and become an official Google product.

Three things have been established.

  1. GAE has lost its distinctiveness and is becoming a “me too” offering. It is moving away from CPU-based pricing to instance-based pricing.
  2. The change in the pricing model has made a mockery of the architectural decisions of many developers.
  3. It proves that market forces, and not the cost of providing the service, are going to decide cloud pricing.

GAE has lost its distinctiveness

I had earlier thought that GAE was potentially disruptive – but I was wrong!

Two of the most disruptive elements (CPU-cycle-based pricing & NoSQL) have been abandoned. Only one disruptive element remains: you don’t need any additional software to deploy an application on GAE – no web server, no app server and no database.

Architectural decisions and cloud pricing

When you build a cloud application and optimize it for the amount of service charges you pay the provider, you need to juggle between CPU utilization, data storage, read-write and I/O bandwidth used.
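As an illustration of that juggling act, here is a deliberately simplified monthly cost model; every rate below is hypothetical and only stands in for whatever your provider’s current price list says:

```python
# Hypothetical unit rates -- substitute your provider's actual price list.
RATE_INSTANCE_HOUR = 0.08   # $ per instance-hour
RATE_STORAGE_GB    = 0.10   # $ per GB-month stored
RATE_IO_MILLION    = 0.10   # $ per million I/O requests
RATE_EGRESS_GB     = 0.12   # $ per GB transferred out

def monthly_cost(instance_hours, storage_gb, io_millions, egress_gb):
    """Estimate one month's bill for a given application profile."""
    return (instance_hours * RATE_INSTANCE_HOUR
            + storage_gb * RATE_STORAGE_GB
            + io_millions * RATE_IO_MILLION
            + egress_gb * RATE_EGRESS_GB)

# The same feature built two ways: compute-heavy vs storage-heavy.
print(monthly_cost(instance_hours=1500, storage_gb=50,  io_millions=20, egress_gb=100))
print(monthly_cost(instance_hours=750,  storage_gb=400, io_millions=80, egress_gb=100))
```

An architecture tuned to minimize one line item (say, CPU hours) can suddenly look much worse the moment the provider reprices another line item – which is exactly what the GAE change did.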

If you had designed your application to take advantage of the GAE pricing model – you may be in for a rude shock. You can go through the comments on the announcement and you will see many examples of this!

Market forces working

There were enough indications earlier that cloud pricing is guided more by market forces than by the cost of providing the service. Amazon EC2, Rackspace and Microsoft pricing look very similar to each other. Also, the reduction in hardware prices has had no effect on cloud service pricing.

Now GAE’s 24 free instance-hours per day looks suspiciously similar to Amazon’s free EC2 tier of 750 hours per month (24 hours × ~31 days ≈ 744 hours).

3 Key Decisions you need to take while preparing a Cloud Strategy

What do you think of cloud computing? Whether you view the Cloud as the Future of Computing, or feel that Cloud Computing is massive hype that will shortly blow over – you are likely to be either drafting or implementing some form of a Cloud Strategy.

I am sure you have kept yourself updated on what is happening on cloud computing. You have a good understanding of the possible advantages (cost saving, agility, etc.) of cloud computing. You are also aware of all the concerns of adopting cloud computing (if you want an update – you can have a look at the report from Gartner IT Council on Cloud computing).

However, to make an effective cloud strategy, you need to look beyond the hype and also take a realistic view of the cloud’s shortcomings. Unfortunately, cloud computing is such a nebulous term that it can be used to encompass almost anything that is connected with IT! So, making an effective strategy for such a vast canvas is a daunting task.

However, your task can become much more manageable if you make up your mind on the following three points.

1.  Should “cost saving” be one of the strategic objectives?

In theory, the economy of scale of the cloud provider is expected to make cloud infrastructure much more cost effective than what can be achieved in your data center.

In practice, assuming that you have already implemented virtualization, the cloud is likely to be cost effective only for some types of usage. This is especially true if you are looking at IaaS (you can have a look at this research finding).

I suspect that the charges for cloud services are dictated more by demand, supply and competition than by how much it costs to offer such services. Otherwise, how do you explain Amazon AWS and Microsoft Azure making incoming traffic free from the same day?

All of us know that hardware prices keep coming down. In 2 years’ time you can buy a machine with twice as much power at the same cost. However, cloud pricing has not shown any such trend. So, a migration which might have been cost effective 2 years back may no longer be so.

Yes, you may achieve cost saving by migration to cloud – but that is not the point.

The question is: should you make cost saving a strategic objective for the cloud?

My recommended answer will be – No.

If you have answered yes to this question then you need to identify which of the applications have a variable load pattern and make plans for migrating them to cloud.

2. Is there a business case for increasing the flexibility and agility of your IT operation?

If you are a startup, then the cloud can make the difference between survival and death. On the other hand, for most mainstream organizations, taking advantage of the flexibility of the cloud is not straightforward.

Yes, you can spin up a cloud instance in minutes, but will your existing operational setup allow you to do so and deploy an application in production?

Do you have the following in place?

  • Agile development methodology
  • DevOps practices and self-service deployment
  • Charging user department for usage

Without these practices in place, you will find it difficult to be agile. Implementing these practices will require significant amount of change in how you work. It will require time and effort. It will require people to change their thinking.

So, should agility and flexibility be a strategic objective for the cloud?

My recommendation is – Not unless it is required by business.

If you have answered yes to this question then you need to start from a business objective and find a business sponsor who will be ready to fund the cloud migration.

3. Do you need to leverage the distributed nature of the cloud?

Do you believe that applications can be developed without bothering about where they are going to be deployed and still take full benefit of cloud computing? There are some who think so. However, consider the following points:

  1. Cloud computing without variable load does not make sense …
  2. Cloud computing without multiple machine instances does not make sense …
  3. Cloud computing without fault tolerance does not make sense …

Does your current practice of architecting, designing and developing applications take care of the above? If the answer is yes, then it is great news – you are already several steps nearer to moving to the cloud. If the answer is no, then you need to consider the implications of incorporating these principles.
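As a small illustration of what “designing for fault tolerance” implies, here is a generic sketch (not tied to any particular cloud SDK) of a worker written on the assumption that any individual attempt can fail – the work is idempotent, and failed items are retried rather than assumed lost:

```python
import random
import time

def process(item):
    # Stand-in for real work. In the cloud the instance running this can
    # disappear at any moment, so the operation must be safe to repeat.
    if random.random() < 0.3:
        raise RuntimeError("transient failure (lost instance, timeout, ...)")
    return item * item

def run_with_retries(items, max_attempts=3, backoff_seconds=0.1):
    results = {}
    for item in items:
        for attempt in range(1, max_attempts + 1):
            try:
                results[item] = process(item)
                break
            except RuntimeError:
                if attempt == max_attempts:
                    raise                              # give up after the last attempt
                time.sleep(backoff_seconds * attempt)  # simple back-off before retrying
    return results

print(run_with_retries(range(10)))
```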

Here are some pointers to get you thinking in this direction:

Moving an existing application may mean a complete rewrite.

Therefore, would you want to take advantage of the distributed nature of the cloud?

My recommendation is – Not for existing applications.

If you answer yes to this question then you need to find applications which can benefit from distributed processing.

Have you answered “No” to all the 3 questions?

Then you have simplified life for yourself. Your cloud strategy can focus on one and only one point.

“How can you get rid of all the IT related activities that are not core to your business?”

The essence of cloud computing strategy boils down to this one single point.

You have to decide which of your applications are of strategic importance to you – which of them have embedded code which you consider as your key IPR – which of them use data that are of critical business value?

Such business critical applications can remain with you and you can consider setting up a private cloud for them.

For everything else you can prepare a cloud migration plan. That plan can be mostly based on usage of SaaS taking into account how much life is left in the existing implementation, the cost of maintaining the application and the maturity of the SaaS offering available in the market.

The risk in this approach is that you may be making yourself redundant.

The recent Forrester study seems to indicate that this is really the direction the world is moving. So, if this is the direction in which the world is going to move, you had better get prepared for it.

Forrester says Future of Cloud is SaaS

Well … they have not actually said it, but the data provided in their report “Sizing the Cloud – Understanding and Quantifying the Future of Cloud Computing” implies so. The projection claims that going forward more than 80% of US cloud revenue will be from SaaS, and that this trend will continue for the next 10 years.

[Forrester has an additional cloud classification of BPaaS, which stands for Business-Process-as-a-Service. It involves provisioning of highly standardized end-to-end business processes delivered via dynamic pay-per-use and self-service consumption models. You can think of it as BPO offered in the cloud.]

I tend to agree with them. Here is why …

What is the value proposition of Cloud?

  1. Save cost through better utilization
  2. Improve IT agility
  3. Help focus on core activities

Recent studies have questioned the cost saving potential of the cloud. Cost saving possibilities exist when the load is unpredictable and volatile. However, for most organizations and for most types of applications this is not true.

In theory, cloud gives you the freedom to start a new machine instance and deploy your application within minutes. In practice, how often will you need such speed? In how many different real world situations will it give you a competitive advantage?

That leaves us with the third option. You would definitely have read about the analogy between IT and electricity generation – how electricity generation has moved away from captive units to large centralized facilities, and how cloud computing will do the same to IT infrastructure.

So, if the cost saving potential of the cloud and the need for agile deployment are limited, then the main attraction of cloud computing is to get rid of the entire headache associated with managing the hardware, software and networking setup and to focus on your core activity.

But software has become the core IP of most organizations

True.

But, such core software forms only a small part of any organization’s application portfolio. Then again, most organizations will be reluctant to move such software to the cloud.

The vast majority of the applications that are expected to move to the cloud are of the kind that is not considered a core activity of the enterprise.

To simplify IT infrastructure management – how do IaaS, PaaS and SaaS help?

Cloud does not help you in managing your client and networking infrastructure. You will more or less have the same overhead irrespective of your choice of IaaS, PaaS or SaaS.

IaaS partially relieves you of the burden of looking after the physical server infrastructure, which includes physical security. However, you still have to manage the virtual instance of each server. Though some degree of automation is possible, you will still have to manage each instance of the virtual machine. The concept of DevOps is gaining momentum, but that only shifts the burden and does nothing to reduce the workload. As some of the recent cloud service outages have demonstrated, you will still have to plan for DR (see this).
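To give a flavour of what “some degree of automation” means on IaaS, here is a sketch using the boto library (the AMI id, key pair and security group names below are placeholders). Note that even after scripting the launch, the resulting instance is still yours to patch, monitor and eventually terminate:

```python
import time
import boto.ec2

# Connect to a region; credentials are picked up from the environment or ~/.boto.
conn = boto.ec2.connect_to_region("us-east-1")

# Launch a virtual server. The AMI id and names are placeholders.
reservation = conn.run_instances(
    "ami-xxxxxxxx",                 # hypothetical machine image
    instance_type="m1.small",
    key_name="my-keypair",
    security_groups=["web-tier"],
)
instance = reservation.instances[0]

# Wait until the instance is running, then tag it so we can track it later.
while instance.update() != "running":
    time.sleep(5)
conn.create_tags([instance.id], {"Name": "web-01", "Owner": "ops"})

print(instance.id, instance.public_dns_name)
```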

PaaS has two variants. One is a pure PaaS environment like Google App Engine or Microsoft Azure where you can deploy your bespoke application. The other variant is the SaaS extension like Force.com which you can use to enhance or extend your SaaS. Though we cannot draw a clear line between the two (you can implement a bespoke application using Force.com) – I am talking here about the independent PaaS platforms. These platforms relieve you of the effort of managing the machine instances, operating system and most of the system software. However, the biggest challenge for PaaS is that your existing applications will not run on it – so there is a high barrier to adoption.

SaaS may or may not help you achieve cost savings. It may or may not make you more agile, but what it will definitely do is relieve you of the necessity of spending effort to keep the applications running. Whether the SaaS provider will be able to match your security and reliability needs is a different question. The implication is that if you find a SaaS offering which meets your needs, you will no longer have to spend effort to keep the system running. DR also becomes the headache of the provider.

Cloud – Market forces working

Some time back I had complained that the prices of cloud service offerings are not coming down fast enough compared to the drop in hardware prices. But then we see the following release from Amazon AWS.

“…we’ve often told you that one of our goals is to drive down costs continuously and to pass those savings on to you…” (see this)

Indeed, from 1st July, they have eliminated the inbound data fee. It used to be US$ 0.10 per GB.

They have also reduced the outbound data fee by between 20% (at the lower end of usage) and almost 40% (at the higher end of usage).

Here is the comparison of per GB outbound data fee, before and after 1st July (for US-Standard, US-West and Europe regions):

Slab for Data Transfer per month | Price per GB before 1st July | Price per GB from 1st July
First 1 GB | Free | Free
Up to 10 TB | US$ 0.15 | US$ 0.12
Next 40 TB | US$ 0.11 | US$ 0.09
Next 100 TB | US$ 0.09 | US$ 0.07
Next 350 TB | US$ 0.08 | US$ 0.05
Over 500 TB | US$ 0.08 | Special price – not disclosed
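To see what the new slabs mean for a real bill, here is a small calculator that applies the post-1st-July prices from the table above (I treat each row as the size of that slab, and stop at 500 TB since the top slab is priced on request):

```python
# Post-1st-July outbound transfer slabs: (slab size in GB, price per GB in US$).
SLABS = [
    (1,          0.00),   # first 1 GB is free
    (10 * 1024,  0.12),   # up to 10 TB
    (40 * 1024,  0.09),   # next 40 TB
    (100 * 1024, 0.07),   # next 100 TB
    (350 * 1024, 0.05),   # next 350 TB
]

def outbound_cost(total_gb):
    """Cost of transferring total_gb out of AWS in one month (up to 500 TB)."""
    cost, remaining = 0.0, total_gb
    for slab_size, price_per_gb in SLABS:
        used = min(remaining, slab_size)
        cost += used * price_per_gb
        remaining -= used
        if remaining <= 0:
            break
    return cost

# Example: a BI workload pushing 25 TB out in a month.
print("US$ %.2f" % outbound_cost(25 * 1024))   # roughly US$ 2,611
```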

However, was this change in price done, as claimed by Amazon, to pass the cost reduction benefit on to customers? Probably not, because a few days before this announcement Microsoft had announced that inbound data transfer will be free from 1st July, 2011 (see this).

BI in the Cloud becomes attractive

This change may open up interesting possibilities for doing DW & BI applications on the cloud. Earlier, the biggest roadblock for moving such applications to the cloud was the prohibitive cost of transferring data into the cloud. With the removal of that stumbling block, the economics of BI in the cloud looks more attractive.