What is Consumerization of IT?

If you are unfamiliar with the term “Consumerization of IT” (CoIT), you may think that it means “IT has become a consumer product”.

But that is not the interpretation which is generally accepted.

Traditionally, adoption of Information Technology used to start with defense and government, followed by the business enterprise. Those technologies were sold in low volumes and at high cost. Only over a period of time did the cost come down, making it affordable for the individual consumer to adopt such technologies.

Now we see that the trend seems to have reversed. Many of the latest technologies get adopted by the individual consumer first, to be followed by business enterprise, defense and government.

This trend – this change in direction of technology absorption – is called Consumerization of IT.

Where did the Term originate?

Douglas Neal and John Taylor seem to have used the term “Consumerization of IT” in 2001. So we can conclude that by 2001 the effect was quite visible, and we can safely assume that the trend had already started in the mid-nineties. However, the trend has lately become more visible. There are many examples:

  • Smart phones – iPhone, Android devices
  • Tablets – iPad
  • Social networking – Facebook, Twitter, LinkedIn
  • Cloud email – Gmail, Hotmail
  • Other cloud hosted services – Dropbox, Google Apps

How does this impact you?

As usual, the answer is “it depends” … it depends on what you do!

If you are a typical “knowledge worker” not directly involved with IT then you might see this as an opportunity to use your favorite devices and tools in your workplace.

If you are involved in running the IT setup you will see this as a big headache.

If you are a forward-looking manager you may look at this as the lever for collaboration and productivity improvement.

And, if like me, you are involved in technology management then you should be concerned as you may have to discard all the existing theory of technology lifecycle management.

Finally, the consumer in you will hope that all those people who are trying to sell stuff to you will realize that they have to make all the support services more useable or else …

Why is IT concerned about CoIT?

It is about loss of control and unpredictability.

Security concern: With more types of devices, more services hosted in the cloud, and more mechanisms of exchanging information, you create more opportunities for hackers and intruders. This is especially true if your organization has to comply with mandatory regulations – on data security, on access control, on network security and many more. Even the laws are less clear on information transmitted through employee accounts and social networks, even when at work.

Device proliferation: However, the impact is not limited to security concerns. There is a bigger challenge of making all the services available on the variety of devices which may be in use by your customers and your employees. Traditionally, IT could make plans to systematically roll out a new type of device, a new version of an OS, or a new version of software like a browser, so it could be ensured that all the services work properly. There could be a delay of months, even years, before you upgraded. With CoIT you do not have that luxury.

Increasing volume and variability of transaction load: When mobile apps are in the hands of external parties, it is hard to know when they will be moved to interact with you. Applications cannot have downtime in this world, only varying levels of use around the clock. Not only does the availability of mobile make your user population’s usage profiles more variable, you are now subject to those profiles, and they can drive enormous traffic to your systems.

And some more points to be concerned about…

  • Alternate solutions are freely available on the net and are so easy to use
  • These technologies are perceived as the catalyst for next generation productivity improvement
  • You have so many always connected people who are on the move

Think about these problems … you may have more sympathy for your IT department and the restrictions it puts in place!

Scratching the surface

What we have discussed so far is really scratching the surface of CoIT. It is a manifestation of a fundamental change that is sweeping over us.

To establish my proposition, I will need to answer 2 questions:

  1. Why is it happening now – what has really changed?
  2. Why did I mention earlier that we may have to discard all the existing theory of technology lifecycle management?

Well – I will need two more posts to explain these.

TOGAF 9.1 Released – What does it mean to you?

If you are planning to take the TOGAF certification examination, you would definitely want to know how the release of TOGAF 9.1 impacts you. You would want to know which version you need to study.

Here is the simple guideline. If you are planning to appear for the exam…

  1. …before June 2012, then you should study TOGAF 9
  2. …between June 2012 and May 2013, then you can study either TOGAF 9 or 9.1
  3. …after June 2013, then it is only TOGAF 9.1

In a nutshell, if you have already done most of the studying using TOGAF 9.0 then you have slightly more than a year to clear the exam. However, if you have yet to begin the study, you had better start with 9.1.

What are the main differences between TOGAF 9 and 9.1?

The Open Group has published a presentation in the form of a PDF which provides an overview of the differences – here is the link.

If you would prefer to have a look at the differences as a two-pager, then I recommend that you go through this post by Mike Walker.

However, I think the biggest difference between the two is how the objectives of each of the ADM phases are written. The latest version seems to be a significant improvement. This is also the most important change for those of you who want to appear for the foundation level exam.

You may also need to go through Phase E and F more carefully as they have been reworked.

Comparison of ADM Objectives – TOGAF 9 vs. TOGAF 9.1

For each phase below, the bulleted items are the objectives as per TOGAF 9 and the numbered items are the objectives as per TOGAF 9.1.

Preliminary Phase
  • To review the organizational context for conducting enterprise architecture
  • To identify the sponsor stakeholder(s) and other major stakeholders impacted by the business directive to create an enterprise architecture and determine their requirements and priorities from the enterprise, their relationships with the enterprise, and required working behaviors with each other
  • To ensure that everyone who will be involved in, or benefit from, this approach is committed to the success of the architectural process
  • To enable the architecture sponsor to create requirements for work across the affected business areas
  • To identify and scope the elements of the enterprise organizations affected by the business directive and define the constraints and assumptions (particularly in a federated architecture environment)
  • To define the ‘‘architecture footprint’’ for the organization — the people responsible for performing architecture work, where they are located, and their responsibilities
  • To define the framework and detailed methodologies that are going to be used to develop enterprise architectures in the organization concerned (typically, an adaptation of the generic ADM)
  • To confirm a governance and support framework that will provide business process and resources for architecture governance through the ADM cycle; these will confirm the fitness-for-purpose of the Target Architecture and measure its ongoing effectiveness (normally includes a pilot project)
  • To select and implement supporting tools and other infrastructure to support the architecture activity
  • To define the architecture principles that will form part of the constraints on any architecture work
  1. Determine the Architecture Capability desired by the organization:
    • Review the organizational context for conducting enterprise architecture
    • Identify and scope the elements of the enterprise organizations affected by the Architecture Capability
    • Identify the established frameworks, methods, and processes that intersect with the Architecture Capability
    • Establish Capability Maturity target
  2. Establish the Architecture Capability:
    • Define and establish the Organizational Model for Enterprise Architecture
    • Define and establish the detailed process and resources for architecture governance
    • Select and implement tools that support the Architecture Capability
    • Define the Architecture Principles
Phase A
  • To ensure that this evolution of the architecture development cycle has proper recognition and endorsement from the corporate management of the enterprise, and the support and commitment of the necessary line management
  • To define and organize an architecture development cycle within the overall context of the architecture framework, as established in the Preliminary phase
  • To validate the business principles, business goals, and strategic business drivers of the organization and the enterprise architecture Key Performance Indicators (KPIs)
  • To define the scope of, and to identify and prioritize the components of, the Baseline Architecture effort
  • To define the relevant stakeholders, and their concerns and objectives
  • To define the key business requirements to be addressed in this architecture effort, and the constraints that must be dealt with
  • To articulate an Architecture Vision and formalize the value proposition that demonstrates a response to those requirements and constraints
  • To create a comprehensive plan that addresses scheduling, resourcing, financing, communication, risks, constraints, assumptions, and dependencies, in line with the project management frameworks adopted by the enterprise (such as PRINCE2 or PMBOK)
  • To secure formal approval to proceed
  • To understand the impact on, and of, other enterprise architecture development cycles ongoing in parallel
  1. Develop a high-level aspirational vision of the capabilities and business value to be delivered as a result of the proposed enterprise architecture
  2. Obtain approval for a Statement of Architecture Work that defines a program of works to develop and deploy the architecture outlined in the Architecture Vision


Phase B
  • To describe the Baseline Business Architecture
  • To develop a Target Business Architecture, describing the product and/or service strategy, and the organizational, functional, process, information, and geographic aspects of the business environment, based on the business principles, business goals, and strategic drivers
  • To analyze the gaps between the Baseline and Target Business Architectures
  • To select and develop the relevant architecture viewpoints that will enable the architect to demonstrate how the stakeholder concerns are addressed in the Business Architecture
  • To select the relevant tools and techniques to be used in association with the selected viewpoints
  1. Develop the Target Business Architecture that describes how the enterprise needs to operate to achieve the business goals, and respond to the strategic drivers set out in the Architecture Vision, in a way that addresses the Request for Architecture Work and stakeholder concerns
  2. Identify candidate Architecture Roadmap components based upon gaps between the Baseline and Target Business Architectures


Phase C

The objective of Phase C is to develop Target Architectures covering either or both (depending on project scope) of the data and application systems domains. Information Systems Architecture focuses on identifying and defining the applications and data considerations that support an enterprise’s Business Architecture; for example, by defining views that relate to information, knowledge, application services, etc.
  1. Develop the Target Information Systems (Data and Application) Architecture, describing how the enterprise’s Information Systems Architecture will enable the Business Architecture and the Architecture Vision, in a way that addresses the Request for Architecture Work and stakeholder concerns
  2. Identify candidate Architecture Roadmap components based upon gaps between the Baseline and Target Information Systems (Data and Application) Architectures


Phase D

The Technology Architecture phase seeks to map application components defined in the Application Architecture phase into a set of technology components, which represent software and hardware components, available from the market or configured within the organization into technology platforms. As Technology Architecture defines the physical realization of an architectural solution, it has strong links to implementation and migration planning. Technology Architecture will define baseline (i.e., current) and target views of the technology portfolio, detailing the roadmap towards the Target Architecture, and identify key work packages in the roadmap. Technology Architecture completes the set of architectural information and therefore supports cost assessment for particular migration scenarios.
  1. Develop the Target Technology Architecture that enables the logical and physical application and data components and the Architecture Vision, addressing the Request for Architecture Work and stakeholder concerns
  2. Identify candidate Architecture Roadmap components based upon gaps between the Baseline and Target Technology Architectures


Phase E
  • To review the target business objectives and capabilities, consolidate the gaps from Phases B to D, and then organize groups of building blocks to address these capabilities
  • To review and confirm the enterprise’s current parameters for and ability to absorb change
  • To derive a series of Transition Architectures that deliver continuous business value (e.g., capability increments) through the exploitation of opportunities to realize the building blocks
  • To generate and gain consensus on an outline Implementation and Migration Strategy
  1. Generate the initial complete version of the Architecture Roadmap, based upon the gap analysis and candidate Architecture Roadmap components from Phases B, C, and D
  2. Determine whether an incremental approach is required, and if so identify Transition Architectures that will deliver continuous business value


Phase F
  • To ensure that the Implementation and Migration Plan is coordinated with the various management frameworks in use within the enterprise
  • To prioritize all work packages, projects, and building blocks by assigning business value to each and conducting a cost/business analysis
  • To finalize the Architecture Vision and Architecture Definition Documents, in line with the agreed implementation approach
  • To confirm the Transition Architectures defined in Phase E with relevant stakeholders
  • To create, evolve, and monitor the detailed Implementation and Migration Plan providing necessary resources to enable the realization of the Transition Architectures, as defined in Phase E
  1. Finalize the Architecture Roadmap and the supporting Implementation and Migration Plan
  2. Ensure that the Implementation and Migration Plan is coordinated with the enterprise’s approach to managing and implementing change in the enterprise’s overall change portfolio
  3. Ensure that the business value and cost of work packages and Transition Architectures is understood by key stakeholders


Phase G
  • To formulate recommendations for each implementation project
  • To govern and manage an Architecture Contract covering the overall implementation and deployment process
  • To perform appropriate governance functions while the solution is being implemented and deployed
  • To ensure conformance with the defined architecture by implementation projects and other projects
  • To ensure that the program of solutions is deployed successfully, as a planned program of work
  • To ensure conformance of the deployed solution with the Target Architecture
  • To mobilize supporting operations that will underpin the future working lifetime of the deployed solution
  1. Ensure conformance with the Target Architecture by implementation projects
  2. Perform appropriate Architecture Governance functions for the solution and any implementation-driven architecture Change Requests


Phase H
  • To ensure that baseline architectures continue to be fit-for-purpose
  • To assess the performance of the architecture and make recommendations for change
  • To assess changes to the framework and principles set up in previous phases
  • To establish an architecture change management process for the new enterprise architecture baseline that is achieved with completion of Phase G
  • To maximize the business value from the architecture and ongoing operations
  • To operate the Governance Framework
  1. Ensure that the architecture lifecycle is maintained
  2. Ensure that the Architecture Governance Framework is executed
  3. Ensure that the enterprise Architecture Capability meets current requirements


Here are the links to the material from The Open Group

Brief History of Agile Movement

In February this year the agile movement completes 11 years of existence. I am sure you are either using some form of agile methodology or examining the possibility of using one. But are you aware of how the agile movement happened? Did it happen by chance, or was it inevitable? Do you know what influenced the agile manifesto? Who the authors are? What their backgrounds are and what they do now? How the name “Agile” was selected?

The Influencers

It is clear from the notes published by Jon Kern that 4 methodologies had significant influence on the manifesto – they are:

  1. Scrum (Jeff Sutherland and Ken Schwaber – also Mike Beedle)
  2. DSDM (DSDM Consortium represented by Arie van Bennekum)
  3. ASD (Jim Highsmith)
  4. XP (Kent Beck, Ward Cunningham and Ron Jeffries – Martin Fowler)

Prior to the meet, all these methodologies were classified as “Lightweight Methodologies”. The meet happened as a logical consequence of an earlier get-together of XP proponents organized by Kent Beck. The push for the actual meet came from Bob Martin. Here are the milestones (1992-2003) that had a significant impact on the movement. Also, I have tried to attach a face to every name – hope you find it interesting.

1992 – Crystal Methods

  Crystal was the starting point of the evolution of software development methodologies which ultimately resulted in what we know as the agile movement. The honor of creating Crystal goes to Alistair Cockburn. The methodology was named “Crystal” only in 1997.

Crystal can be applied to teams of up to 6 or 8 co-located developers working on systems that are not life-critical. You can see the seeds of the agile manifesto in Crystal because it focuses on – (1) Frequent delivery of usable code to users, (2) Reflective improvement and (3) Osmotic communication, preferably by being co-located.

Here is a post by him on “Notes on the writing of the agile manifesto“.

He is a consulting fellow at Humans and Technology, which he founded. (See: His Biography page)

I could not locate him in LinkedIn.

1993 – Refactoring

  Refactoring was coined by Bill Opdyke in a paper titled “Creating Abstract Superclasses by Refactoring”. This is how Wikipedia describes code refactoring:

Code refactoring is a “disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior”, undertaken in order to improve some of the nonfunctional attributes of the software.

He is now the Architecture Lead at JPMorgan Chase. (Source: LinkedIn profile)
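As a small, hypothetical illustration of that definition (my own example, not from Opdyke’s paper): the code below is restructured by extracting a helper function, yet its external behavior – the totals it returns – is unchanged.

```python
# Before refactoring: the line-item calculation is buried inside the loop.
def invoice_total(items):
    total = 0
    for price, qty in items:
        total += price * qty
    return total

# After refactoring ("Extract Function"): same external behavior,
# but the line-item rule now has a name and can be tested on its own.
def line_total(price, qty):
    return price * qty

def invoice_total_refactored(items):
    return sum(line_total(price, qty) for price, qty in items)

# Both versions return identical results for any input,
# which is exactly what "without changing its external behavior" means.
```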

1994 – Dynamic Systems Development Method

DSDM, unlike all the other items listed in this post, was created by a consortium – an association of vendors and experts in the field of software engineering. The objective was “jointly developing and promoting an independent RAD framework” by combining their best practice experiences.

There isn’t any individual who can be credited with the creation of DSDM, but Jennifer Stapleton, as one of the founding members of the DSDM Consortium, was instrumental in the initial compilation of thoughts.

She is now a management consultant in the UK. (See: LinkedIn profile)

  Arie van Bennekum, one of the authors of the agile manifesto, has been actively involved in DSDM and the DSDM Consortium since 1997.

DSDM focuses on the following 8 principles: (1) Focus on the business need, (2) Deliver on time, (3) Collaborate, (4) Never compromise quality, (5) Build incrementally from firm foundations, (6) Develop iteratively, (7) Communicate continuously and clearly, and (8) Demonstrate control. Again, you can see the seeds of the agile manifesto!

He is now a Senior Consultant, Programmanager, Project Manager, Facilitator, Trainer,  Coach, Mentor, Teacher etc. in Netherlands. (See: LinkedIn profile)

1995 – Scrum and Pair Development


  Scrum was jointly created by Jeff Sutherland and Ken Schwaber, who presented a paper describing it at OOPSLA ’95 in Austin, Texas.

Jeff Sutherland is the CEO at Scrum, Inc. (Source: LinkedIn profile).

Ken Schwaber is a founder of Scrum.org. (Source: LinkedIn profile).

Mike Beedle was one of the very early adopters of Scrum and has introduced Scrum to many organizations since the mid-90s.

As you know, Scrum has practically become the de facto standard for agile.

He is now the Founder and CEO at Enterprise Scrum. (See: LinkedIn profile)

Pair Development

  Pair Development as a concept was simultaneously but independently written about by more than one person.

Jim Coplien published a paper titled “A Development Process Generative Pattern Language” which contained a pattern “Developing in Pairs”.

He is a Lean and Agile Software Development Coach in Denmark. (Source: LinkedIn profile)

  Larry Constantine talked about “Dynamic Duos” in his book “Constantine on Peopleware” published in the same year. This concept went on to become an integral part of Extreme Programming.

Though a lot of research has been conducted to show the effectiveness of pair programming, the concept or philosophy does not really reflect in the Agile Manifesto.

He is now a novelist and university professor in the USA. (Source: LinkedIn profile)

1997 – Feature Driven Development

  Feature Driven Development was initially devised by Jeff De Luca.

The best practices of FDD are: (1) Domain Object Modeling, (2) Developing by Feature, (3) Individual Class (Code) Ownership, (4) Feature Teams, (5) Inspections, (6) Configuration Management, (7) Regular Builds and (8) Visibility of progress and results.

Interestingly, “Individual Class (Code) Ownership” goes against the concept of collective code ownership, which is considered a key practice today.

He is now the President at Nebulon. (Source: LinkedIn profile)

  However, the FDD process was explained to the world through the publication of the book “Java Modeling in Color with UML: Enterprise Components and Process”, which he coauthored with Peter Coad.

He built and sold TogetherSoft to Borland. Currently he is into many things other than Agile! (See: petercoad.com)

He has a LinkedIn page but it is empty, with no connections!

  Jon Kern, one of the authors of the agile manifesto, had worked closely with both Jeff De Luca and Peter Coad and helped shape the charter on FDD.

Here are his “Agile Manifesto Notes – Feb 2001, Snowbird, Utah“. These have been dug out and hosted by Jeff Sutherland.

He describes himself as a Software Development Quarterback and is associated with multiple companies. (See: LinkedIn profile)

1999 – Many Things Happened

Adaptive Software Development

  Jim Highsmith formalized the concept of Adaptive Software Development and published a book with the same name.

The idea grew out of his work on Rapid Application Development methodologies. He proposed a three-phase lifecycle of – (1) Speculation, (2) Collaboration and (3) Learning.

He has also written the history, or the story, behind the formulation of the agile manifesto. He is now an Executive Consultant at ThoughtWorks. (See: LinkedIn profile)

The Pragmatic Programmer

  Andrew Hunt published the book The Pragmatic Programmer: From Journeyman to Master.

The book laid out the characteristics of a pragmatic programmer as one who is (1) an early adopter / fast adapter, (2) inquisitive, (3) a critical thinker, (4) realistic and (5) a jack-of-all-trades.

He describes himself as Pragmatic /ndy — speaker, author, publisher! (See: LinkedIn profile)

  The coauthor of the book was Dave Thomas. If you go through the detailed list of recommendations you will see their influence on the manifesto.

Here is his recollection of what transpired in the meet in February 2001 – “Some Agile History“.

He describes himself as a Software Visionary! (See: LinkedIn profile)

Extreme Programming, User Stories, Release Planning and Continuous Integration

  While Kent Beck was working at Chrysler he developed the concept of Extreme Programming. He published the method in 1999 as a book – Extreme Programming Explained. As a part of Extreme Programming, he also introduced the concepts of User Stories and Release Planning.

The methodology specifies best practices for planning, managing, designing, coding and testing.

He is at Facebook and calls himself a Programmer!! (See: LinkedIn profile)

  Apart from being a collaborator in XP, Ward Cunningham is also the creator of the Wiki.

Apart from being the Founder of Cunningham & Cunningham, he is also the CTO at CitizenGlobal. (See: LinkedIn profile)

  Ron Jeffries was also a collaborator, and the three of them together are considered the founders of XP.

His biography page states that he has been developing software longer than most people have been alive. (See: Biographical Notes)

I could not locate him in LinkedIn.

  Though some people think that Martin Fowler introduced the term Continuous Integration, in reality CI was also coined by Kent Beck.

Here is his recollection on the “Writing The Agile Manifesto“.

He calls himself an author and speaker and works with ThoughtWorks. (See: About Martin Fowler)

I could not locate him in LinkedIn.

2000 – Events leading up to the Manifesto

  Bob Martin took the initiative to get the ball rolling on organizing the historic meeting, held in February 2001 at “The Lodge” at the Snowbird ski resort in the Wasatch Mountains of Utah.

He is the Owner of Uncle Bob Consulting. (See: LinkedIn profile)

2001 – Agile Manifesto

2001 February + ‘The Lodge’ at Snowbird Ski Resort + 17 Thinkers = Agile Manifesto

Kent Beck, Mike Beedle, Arie van Bennekum, Alistair Cockburn, Ward Cunningham, Martin Fowler, James Grenning, Jim Highsmith, Andrew Hunt, Ron Jeffries, Jon Kern, Brian Marick, Bob Martin, Stephen Mellor, Ken Schwaber, Jeff Sutherland, and Dave Thomas

2002 – More Agile Concepts

Test Driven Development

  For TDD the credit goes to Kent Beck. The concept of Test Driven Development originated from the XP test-first approach. It was given shape later by Kent Beck through the book Test Driven Development: By Example.
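To make the test-first rhythm concrete, here is a minimal, hypothetical Python sketch of the red-green cycle (the function and its behavior are my own illustration, not from Beck’s book): you write a failing test first, then just enough code to make it pass.

```python
# Step 1 (red): write the test before the code exists.
# Run on its own, this would fail with a NameError - that failure is the "red" bar.
def test_pad():
    assert pad(7, width=3) == "007"
    assert pad(42, width=2) == "42"

# Step 2 (green): write the simplest code that makes the test pass.
def pad(n, width):
    return str(n).zfill(width)

# Step 3: run the test again; it passes, so we can now refactor with a safety net.
test_pad()
print("all tests pass")
```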

Planning Poker

  The concept of Planning Poker was formulated by James Grenning.

Here is the original paper.

He is the Founder of Renaissance Software Consulting. (Source: LinkedIn Profile)
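The mechanics of planning poker are simple enough to sketch in a few lines of Python (a hypothetical illustration, not from Grenning’s paper): everyone picks a card from a Fibonacci-like deck, the estimates are revealed simultaneously, and divergence triggers discussion rather than averaging.

```python
# A Fibonacci-like deck typical of planning poker.
DECK = [0, 1, 2, 3, 5, 8, 13, 20, 40, 100]

def reveal(estimates):
    """Simultaneous reveal: return (consensus, needs_discussion).

    Consensus only when everyone picked the same card; otherwise the
    outliers explain their reasoning and the team votes again.
    """
    values = set(estimates.values())
    if len(values) == 1:
        return values.pop(), False
    return None, True

# Round 1: Lee's 13 vs the others' 5 forces a discussion, not an average.
round1 = {"Amy": 5, "Raj": 5, "Lee": 13}
print(reveal(round1))   # (None, True) -> discuss and re-vote

# Round 2: after discussion, everyone converges on an estimate.
round2 = {"Amy": 8, "Raj": 8, "Lee": 8}
print(reveal(round2))   # (8, False) -> estimate accepted
```

The key design point the sketch captures is that disagreement is surfaced, not hidden: a split vote never produces a number.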

What about Brian Marick and Stephen Mellor?

  Brian Marick is the Owner at Exampler Consulting and calls himself a software consultant specializing in agile methods with a testing slant. (See: LinkedIn profile)
  Stephen Mellor calls himself a “Freeter”, a Japanese word, derived from English, that means “free agent.” (Source: His home page) He resides in Zimbabwe and here is his LinkedIn profile.

2003 – Lean Software Development

Is Lean Software Development an extension of agile methodology? Should we look at it as something distinct from agile? Should it find a place in this post? I have included it for the primary reason that many agilists consider it to be one of the future directions of the agile movement. Anyway, the term was coined by Mary Poppendieck and Tom Poppendieck in 2003.

It is an adaptation of lean manufacturing principles and practices to software development. There are seven principles – (1) Eliminate waste, (2) Amplify learning, (3) Decide as late as possible, (4) Deliver as fast as possible, (5) Empower the team, (6) Build integrity in and (7) See the whole. Amplify learning, deliver as fast as possible, empower the team, etc. go very well with agile principles.

I am not so sure about eliminate waste and see the whole.

Cross-Platform Mobile Game Development – a Tool Comparison

Mobile game development has a world of its own. You will come across a different set of programming languages which you would not have encountered elsewhere – Lua, LiveCode, UnrealScript, Boo, etc. Some of these tools are a derivative or an extension of what is available on other gaming platforms, while others have been explicitly developed for mobile. At least one of these platforms may cease to be a game development platform and become an enterprise cross-platform mobile application development solution.

As I have mentioned earlier (here it is), there are five approaches to cross-platform mobile application development, and many tools are available under each category. They are:

(1)    Mobile Web (JavaScript-CSS library), (see this)

(2)    Visual Tool (No access to Code), (see this)

(3)    App Generator (Native application for multiple platforms), (see this)

(4)    Hybrid App (Leverages embedded browser control) (see this) and

(5)    Game Builder.

Here are the Game Builder (category 5) tools – the ordering is alphabetical.

1. Bedrock (Metismo)

  • Home page: Link
  • Genesis: Has been acquired by Software AG – rebranded as  webMethods Mobile Designer
  • Language: Java & Cross compiler
  • Version: –
  • Licensing: detail not available
  • Download: no
  • Documentation: not available
  • Sample application: not available
  • Implementation: FinBlade, Xendex
  • Wikipedia: Link

2. Corona (Ansca)


4. LiveCode (RunRev)

5. Marmalade

  • Home page: Link
  • Genesis: It is from Ideaworks3D which has been into cross-platform technology and games software since 1998
  • Language: Visual C++
  • Version: 5.2
  • Licensing: Free evaluation – application cannot be distributed
  • Download: Link
  • Documentation: Index
  • Sample application: Getting Started
  • Implementation: Index
  • Wikipedia: Link
  • Additional: IwGame framework for marmalade
  • Article on how to use: DrMop

6. Moai

7. Unity 3

8. Unreal

9. XPower++

  • Home page: Link
  • Genesis: It has background in cross-compiler for grid computing
  • Language: Basic++, C++, Java++, and Pascal++ language dialects
  • Version:
  • Licensing:
  • Download: Link
  • Documentation:  Index
  • Sample application: (see documentation index)
  • Implementation: ?
  • Wikipedia: Link

Do let me know if there are any errors and omissions in the details I have provided.

More Tool Comparisons

Here are references to articles written by others comparing different cross-platform tools:


Amazon AWS 19th Price Reduction – a Closer Look

If you have been following me regularly, you would know that I maintain that charges for cloud services are not coming down in proportion to the routine reduction of hardware cost. So, let us examine if Amazon has reversed the trend with its latest announcement of price reduction. On 6th March, 2012 Amazon announced:

“…a reduction in Amazon EC2, Amazon RDS, and Amazon ElastiCache prices. Reserved Instance prices will decrease by up to 37% for Amazon EC2 and by up to 42% for Amazon RDS across all regions. On-Demand prices for Amazon EC2, Amazon RDS, and Amazon ElastiCache will drop by up to 10%. We are also introducing volume discount tiers for Amazon EC2, so customers who purchase a large number of Reserved Instances will benefit from additional discounts. Today’s price drop represents the 19th price drop for AWS**, and we are delighted to continue to pass along savings to you as we innovate and drive down our costs…”

**since the launch of the service – (my addition)

How much of this (37%, 42%, 19th etc.) is sales talk and how much of it is reality? Let me just present the data that I have collected.

Have a look at the data and be the judge.
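One reason to look past the headline numbers: a Reserved Instance bundles a one-time fee with a lower hourly rate, so a discount quoted against one component shrinks the overall bill by much less. Here is a minimal Python sketch of that arithmetic. The fee of $227.50 and the rate of $0.03/hour are illustrative assumptions, not actual AWS prices, and I am assuming the quoted 37% applies to the one-time fee.

```python
# Effective hourly cost of a Reserved Instance: the one-time fee is
# amortized over the hours actually used during the reservation term,
# then added to the hourly rate.
HOURS_PER_YEAR = 365 * 24

def effective_hourly(one_time_fee, hourly_rate, term_years=1, utilization=1.0):
    """Effective cost per hour of use over the reservation term."""
    hours_used = HOURS_PER_YEAR * term_years * utilization
    return one_time_fee / hours_used + hourly_rate

# Hypothetical small instance: $227.50 one-time fee, $0.03/hour, 1-year term
before = effective_hourly(227.50, 0.03)
after = effective_hourly(227.50 * (1 - 0.37), 0.03)  # 37% off the one-time fee

print(f"before: ${before:.4f}/hr, after: ${after:.4f}/hr")
print(f"overall saving: {(before - after) / before:.1%}")
```

With these assumed figures, a 37% cut in the one-time fee works out to an overall saving of well under 37% of the effective hourly cost at full utilization (more at partial utilization, where the one-time fee dominates the bill).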

[Update: Why am I not surprised that Google drops the price of Cloud Storage service within a week?]

[Update: This was going to happen and it has happened within 10 days! Microsoft Trying Hard to Match AWS, Cuts Azure Pricing]

How does the current price compare with what existed in January, 2010?

I had taken a dipstick of the prevailing AWS prices in Jan-10, which you can check here. There is no doubt that the breadth and depth of the offering have increased significantly, but we can still compare the prices of those offerings which existed then.

Here is a comparison between prices then and prices now.

  • On-Demand Instances – Small, Linux, N. Virginia: $0.085 (Jan-10), $0.080 (Mar-12)
  • On-Demand Instances – Quadruple Extra Large, Windows, N. California: $3.160 (Jan-10), $2.504 (Mar-12)
  • Data Transfer – In: Free till June 2010 (Jan-10), Free (Mar-12)
  • Data Transfer – Out, per GB depending on the total monthly volume: $0.10 to $0.17 (Jan-10), $0.05 to $0.12 (Mar-12)
  • Storage (EBS), per allocated GB per month: $0.10 (Jan-10), $0.10 (Mar-12)
  • I/O Requests, per million I/O: $0.10 (Jan-10), $0.10 (Mar-12)
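Putting rough numbers on the comparison above, here is a small Python sketch that converts each price pair into a total and an annualized reduction. I am assuming Jan-2010 to Mar-2012 is about 26 months. Even the steepest decline here works out to roughly 10 to 15 percent a year, which supports the point that cloud prices are not falling in proportion to hardware costs.

```python
# Total and annualized price reductions between Jan-2010 and Mar-2012,
# computed from the comparison table above.
MONTHS_ELAPSED = 26  # Jan-2010 to Mar-2012, approximately

def reduction(old, new):
    """Total fractional price reduction over the whole period."""
    return (old - new) / old

def annualized(old, new, months=MONTHS_ELAPSED):
    """Equivalent annual rate of price decline."""
    return 1 - (new / old) ** (12 / months)

table = {
    "On-Demand Small (Linux, N. Virginia)":       (0.085, 0.080),
    "Quadruple Extra Large (Windows, N. Calif.)": (3.160, 2.504),
    "Data Transfer Out, top rate (per GB)":       (0.17, 0.12),
    "EBS Storage (per GB-month)":                 (0.10, 0.10),
}

for item, (old, new) in table.items():
    print(f"{item}: {reduction(old, new):.1%} total, "
          f"{annualized(old, new):.1%} per year")
```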

Price Reductions in the last 3 years

I could locate 12 instances of price reduction announcements in the last 3 years. Except for the current announcement, all others have lowered the price of only one element of AWS. Here is the summary:

  • On-demand instances: 3 Times (Mar-12, Nov-09 & Oct-09)
  • Reserved Instances: 2 Times (Mar-12 & Aug-09)
  • Storage: 3 Times (Mar-12, Feb-11 & May-10)
  • Data Transfer: 2 Times (Jul-11 & Feb-10)
  • Cloud Front: 2 Times (Jul-11 & Jun-10)
  • Cloud Watch: 1 Time (May-11)
  • Premium Support: 1 Time (Jan-11)

Here is the full list:

  • New, lower pricing for Amazon EC2, RDS, and ElastiCache (Mar-12): Reduction in Amazon EC2, Amazon RDS, and Amazon ElastiCache prices. Reserved Instance prices will decrease by up to 37% for Amazon EC2 and by up to 42% for Amazon RDS across all regions. On-Demand prices for Amazon EC2, Amazon RDS, and Amazon ElastiCache will drop by up to 10%. [All]
  • New Lower Pricing Tiers for Amazon CloudFront (Jul-11): Lowered prices for Amazon CloudFront – we’ve added new usage tiers in every region, and in the US and Europe we’ve reduced data transfer pricing in every existing tier. [Cloud Front]
  • AWS Lowers Data Transfer Costs – Effective May 1 (Jul-11): We will no longer charge a separate fee for internet data transfer in. For internet data transfer out, in the US and Europe we’ve reduced the price at every existing usage tier and in all regions. [Data Transfer]
  • Amazon CloudWatch Announces Custom Metrics, Lower Prices for Amazon EC2 Monitoring (May-11): We are lowering the price of existing Detailed Monitoring for Amazon EC2 instances by 68% to $3.50 per instance per month. [Cloud Watch]
  • Amazon S3 announces new lower prices for standard storage (Feb-11): All Amazon S3 standard storage customers will see a reduction in their storage costs. For instance, if you store 50 TB of data on average, you’ll see a 12% reduction in costs, and if you store 500 TB of data on average, you’ll see a 13.5% reduction in costs. [Storage]
  • AWS Introduces New Premium Support Plans, Lowers Usage Prices by 50% on Existing Plans (Jan-11): Usage pricing on existing Premium Support Gold and Silver offerings reduced by 50%. [Premium Support]
  • Amazon CloudFront Adds HTTPS Support, Lowers Prices, Opens NYC Edge Location (Jun-10): Reduced our pricing for regular HTTP requests by 25%: prices for HTTP requests now start at $0.0075 per 10,000 requests. [Cloud Front]
  • New Lower Prices for High Memory Double and Quadruple XL Instances (May-10): Lowered the On-Demand and Reserved prices for High Memory Double Extra Large (m2.2xlarge) and Quadruple Extra Large (m2.4xlarge) DB Instances. [Storage]
  • AWS Announces Lower Pricing for Outbound Data Transfer (Feb-10): Lowering AWS pricing for outbound data transfer by $0.02 across all of our services, in all usage tiers, and in all Regions. [Data Transfer]
  • Announcing Lower Amazon EC2 Instance Pricing (Nov-09): Lowering prices up to 15% for all On-Demand instance families and sizes. [On-Demand Instances]
  • New Lower Price for Windows Instances with Authentication Services (Oct-09): Removed the distinction between Amazon EC2 running Windows and Amazon EC2 running Windows with Authentication Services. [On-Demand Instances]
  • New Lower Prices for Amazon EC2 Reserved Instances (Aug-09): Lowered the one-time fee for all Amazon EC2 Reserved Instances by 30%. [Reserved Instances]

Amazon AWS pricing details

Here is a snapshot of the pricing as it exists now (March 2012).

I will be able to use this in future for comparison!

Succeed or Fail – Windows 8 will be a Game Changer

You may be wondering how a failure can be a game changer. Yes, it is easy to understand that if Windows 8 succeeds then tablet and smartphone computing will be changed forever, but how can it change the game by failing?

Well – if Windows 8 fails then it would be an official endorsement of the end of an era – the era of the supremacy of the personal computer.

Some of you would argue that the era has already ended and that the failure of Windows 8 is a foregone conclusion. But you will probably be in the minority.

Others may argue that every alternate Windows release has been a failure, and even if Windows 8 fails (like Vista), we will still have Windows 9, which will be a success (like Windows 7). However, I think the situation is different now.

What is the main proposition of Windows 8?

Actually, there are two propositions.

  1. Users will prefer to interact with all computing devices in a consistent manner, irrespective of screen size and method of interaction
  2. The Metro UI tiles are a better way to interact than the traditional icon-based interface

Microsoft has tried using the PC interface on tablets and smartphones, and it has consistently failed. Now Microsoft is attempting the reverse. The Metro UI is obviously designed primarily for touch screens, and Microsoft is attempting to use it on traditional PCs and laptops.

I think the underlying assumption is that most screens of the future will be touch enabled. There is also an assumption that the gap between a Laptop and a tablet will narrow or even disappear.

For once Microsoft is not copying Apple

Yes, the Metro UI is innovative and the credit goes completely to Microsoft. It is distinctly different from what Apple has to offer.

Users may accept the Metro UI or they may reject it, but the credit or the blame will rest solely on Microsoft’s shoulders.

What happens if Windows 8 succeeds?

Obviously, Microsoft would have reestablished its supremacy in the OS market. Nokia, Dell, HP, Intel and many others will heave a sigh of relief.

Metro UI would have been accepted by users as a better way of interacting with touch screens. Apple and Google would need to come up with a response.

People who claim that the future belongs to Apple, Google and Amazon will need to revise their opinion – and yes, it would be a game changer.

What happens if it succeeds only on the tablet and fails on the PC?

It would be an official endorsement that the PC and the tablet are different. All those who depend on PC and laptop sales will need to reinvent themselves or perish.

Metro UI would have established itself as an alternative way of interacting with touch screens. There will be serious competition for Apple and Google.

You will have a new kid on the block, the Metro UI, which will get all the attention of developers, and UI design will be altered forever – a game changer in its own right.

What happens if it fails?

Then it would definitely be the end of the PC era – and a game changer.

The PC and laptop will not die overnight, but they will slowly lose their significance, and along with them many big names of today will find themselves in the same boat.

What is the time frame to judge the success or failure?

One year is too short a time and five years is too long. I think 3 years is the right timeframe to pass judgment.

If by the end of 2015 Windows 8 (or 9) has not become the de facto standard, or at least a strong alternative, we can consider the game to be over for Windows.

Can Every Agile Team Self-Organize?

Statement (A): We know that some teams which have self-organized are much more productive than teams with a similar set of members whose organization has been prescribed from outside.

Statement (B): Self-organizing teams will always outperform an equivalent team with an imposed organization.

Is there a difference between the two statements or am I only playing with words?

Actually, the difference is enormous – and – of great practical significance.

To prove statement (A) we need to show examples of self-organizing teams outperforming teams which are not self-organizing. Even examples of team productivity increasing through a transition from a traditional structure to self-organization will be sufficient.

On the other hand, proving statement (B) is much more difficult – if not impossible.

Even one example of a traditional team being more productive, or a failed attempt to improve productivity through self-organization, will be sufficient to disprove the statement.

What is the difference between (A) and (B)?

Incidentally, if (B) is true then (A) has to be true – while the reverse is not correct.

Statement (A) can be restated as:

“…SOME self-organizing teams can be significantly more productive…”

And the statement (B) can be reworded as:

“…EVERY team can benefit from self-organization…”

Look at the emphasis on SOME and EVERY – that is the difference between (A) and (B). We can be reasonably sure that (A) is true, but what about (B)?
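The SOME/EVERY distinction is simply the difference between an existential and a universal claim. Writing P_self(t) and P_imposed(t) for the productivity of team t under self-organization and under an imposed organization (my notation, purely for illustration), the two statements become:

```latex
\[
\textbf{(A)}\quad \exists\, t \in \text{Teams}:\; P_{\text{self}}(t) > P_{\text{imposed}}(t)
\]
\[
\textbf{(B)}\quad \forall\, t \in \text{Teams}:\; P_{\text{self}}(t) > P_{\text{imposed}}(t)
\]
```

If the set of teams is non-empty, (B) implies (A) but not conversely, and a single counterexample – one team with P_self(t) ≤ P_imposed(t) – disproves (B) while leaving (A) untouched.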

What happens if (B) is true?

If every team can benefit from self-organization, then all you need to do is understand how to achieve it – what are the “do’s and don’ts”? Since most (all?) experts in the agile community make this assumption, there is plenty of advice available on how to achieve it.

Your task becomes much simpler. Not that it is easy to get a traditional team to self-organize, but it is much simpler compared to the alternative, where you have to decide whether the team will be capable of benefiting from self-organization.

What if (B) is not true?

When I say (B) is not true, what I mean is that there CAN be teams which will not improve their performance by becoming self-organizing; their performance may even come down.

Let me just rephrase the above statement:

“…there are examples of teams which have failed to self-organize, or whose performance has gone down after self-organizing…”

I am sure you can find such examples and I don’t think it would be too difficult to do so.

It is possible to analyze these failures, point out what mistakes were made in the approach, and give recommendations on how to avoid such pitfalls. However, if the recommendations contain any one of the following, then we may be indirectly accepting that (B) is false.

  • Does it say that the Scrum Master was interfering too much with the working of the team?
  • Does it say that the team needed more time to self-organize?
  • Does it say that some member of the team was too dominating?
  • Does it say that some of the key members of the team could not get along with each other?
  • Does it say that some of the team members were too inexperienced?

In short, is there a suggestion that the team composition or the Scrum Master needs to be changed, or that they must alter their attitudes significantly?

This is as good as saying that this team – given its current composition – cannot self-organize.

In real life will you always have the luxury to select the right team?

  • What if the team composition is given and cannot be changed?
  • What if the project time frame is too short to get people to change their attitude?
  • What if you cannot find more experienced people?
  • What if your key technical person has an attitude problem?
  • What if two key members of the team cannot get along with each other?

Such things happen in real life – so what should you do? Do you change the composition of the team and try to create a self-organizing team or do you resort to some amount of command and control?


Depending on how you answer the previous question, and how firm a believer you are in the effectiveness of self-organization, you can do one of two things when starting a new project with a new team:

  1. No matter what, you assume that the team will self-organize and work towards that.
  2. You take a pragmatic view of the team composition and decide how much the team can self-organize and how much command & control is needed.

I am sure you would have guessed that I have a leaning towards the second option.