Blog Archive

Proving Master/Slave Clusters Work and Learning along the Way

2009 has been a big year for Tungsten. In January we had (barely) working replication for MySQL. It had some neat features like global IDs and event filters, but to be frank you needed imagination to see the real value. Since then, Tungsten has grown into a full-blown database clustering solution capable of handling a wide range of user problems. Here are just a few of the features we completed over the course of the year:
  • Autonomic cluster management using business rules to implement auto-discovery of new databases, failover, and quick recovery from failures
  • Built-in broadcast monitoring of databases and replicators
  • Integrated backup and restore operations
  • Pluggable replication management, proven by clustering implementations based on PostgreSQL Warm Standby and Londiste
  • Multiple routing mechanisms to provide seamless failover and load balancing of SQL
  • Last, but not least, simple command line installation to configure and start Tungsten in minutes
You can see the results in our latest release, Tungsten 1.2.1, which comes in both open source and commercial flavors. (See our downloads page to get software as well as documentation.)

In the latter part of 2009 we also worked through our first round of customer deployments, which was an adventure but helped Tungsten grow enormously. Along the way, we confirmed a number of hunches and learned some completely new lessons.
  • Hardware is changing the database game. In particular, performance improvements are shifting clustering in the direction of loosely coupled master/slave replication rather than tightly coupled multi-master approaches. As I laid out in a previous article, the problem space is shifting from database performance to availability, data protection, and utilization.
  • Software as a Service (SaaS) is an important driver for replication technology. Not only is the SaaS sector growing, but even small SaaS applications can result in complex database topologies that need parallel, bi-directional, and cross-site replication, among other features. SaaS business economics tend to drive building these systems on open source databases like MySQL and PostgreSQL. By supporting SaaS, you support many other applications as well.
  • Cluster management is hard but worthwhile. Building distributed management with no single points of failure is a challenging problem and probably the place where Tungsten still has the most work to do. Once you get it working, though, it's like magic. Our focus has been not just to simplify management procedures but, wherever possible, to eliminate them entirely by making the cluster self-managing.
  • Business rules rock. We picked the Drools rule engine to help control Tungsten and make it automatically reconfigure itself when data sources appear or fail. The result has been an incredibly flexible system that is easy to diagnose and extend. Just one example: floating IP address support for master databases took two hours to implement using a couple of new rules that work alongside the existing rule set (see the sketch after this list for the shape such rules take). If you are not familiar with rules technology, there is still time to make a New Year's resolution to learn it in 2010. It's powerful stuff.
  • Clustering has to be transparent. I mean really transparent. We were in denial on this subject before we started to work closely with ISPs, where you don't have the luxury of asking people to change code. As a result, Tungsten Replicator is now close to a drop-in replacement for MySQL replication. We also implemented proxying based on packet inspection rather than translation and re-execution, which raises throughput and reduces incompatibilities visible to applications.
  • Ship integrated, easy-to-use solutions. We made the mistake of releasing Tungsten into open source as a set of components that users had to integrate themselves. We have since recanted. As penance we now ship fully integrated clusters with simple installation procedures even for open source editions and are steadily extending the installations to cover not just our own software but also database and network configuration.
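To give a flavor of what rule-driven management looks like, here is a minimal sketch in plain Java of the condition/action shape such rules take. This is not actual Tungsten or Drools code; every name here is invented for illustration.

import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Toy condition/action rule, illustrating the shape of rule-driven cluster
// management. Not actual Tungsten or Drools code; all names are invented.
public class RuleSketch {
    enum State { ONLINE, FAILED }

    static class DataSource {
        final String name;
        final boolean isMaster;
        State state;
        DataSource(String name, boolean isMaster, State state) {
            this.name = name; this.isMaster = isMaster; this.state = state;
        }
    }

    static class Rule {
        final String name;
        final Predicate<DataSource> when;
        final Consumer<DataSource> then;
        Rule(String name, Predicate<DataSource> when, Consumer<DataSource> then) {
            this.name = name; this.when = when; this.then = then;
        }
    }

    public static void main(String[] args) {
        // "When a master fails, promote a slave and move the floating IP."
        Rule failover = new Rule("failover-master",
            ds -> ds.isMaster && ds.state == State.FAILED,
            ds -> System.out.println("promote a slave; move floating IP off " + ds.name));

        DataSource master = new DataSource("centos5a", true, State.FAILED);
        for (Rule rule : List.of(failover)) {
            if (rule.when.test(master)) {
                System.out.println("firing rule: " + rule.name);
                rule.then.accept(master);
            }
        }
    }
}

The appeal of the approach is that adding behavior (like the floating IP support mentioned above) means adding another rule next to the existing ones rather than rewriting a monolithic state machine.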
Beyond the features and learning experiences, the real accomplishment of 2009 was to prove that integrated master/slave clusters can solve a wide range of problems, from data protection to HA to performance scaling. In fact, what we have implemented works a lot better than I expected when we began to design the system back in 2007. (In case this sounds like a lack of faith, plausible ideas do not always work in the clustering field.) If you have not tried Tungsten, download it and see if you share my opinion.

Finally, keep watching Tungsten in 2010. We are a long way from running out of ideas for making Tungsten both more capable and easier to use. It's going to be a great year.

Smart Technology For The Smart Enterprise

The web is abuzz with analyses of Google's real-time search effectiveness, assessed in the context of the death of a Hollywood actress today. The question there is: how real time is real time? Google has demonstrated, through algorithmic means (without human intervention), the capability to report results for a search query less than three minutes after the event was reported (read Google's Matt Cutts's comments therein). What amazing progress by Google!

Recently, Morgan Stanley brought out the fact that the mobile Internet is ramping up faster than the desktop Internet did, and believes more users may connect to the Internet via mobile devices than via desktop PCs within five years – this is pushing things hard for service providers. Massive mobile data growth is driving transitions for carriers and equipment providers. This is slated to increase over time as emerging markets embrace mobile technologies much faster, adding to the material potential for mobile Internet user growth. Low penetration of fixed-line telephony and already vibrant mobile value-added services mean that for many emerging-market users and SMEs, the Internet will be mobile.
As the MIT journal notes, both Google and Microsoft are racing to add more real-time information to their search results, and a slew of startups are developing technology to collect and deliver the freshest information from around the Web. But there's more to the real-time Web than just microblogging posts, social network updates, and up-to-the-minute news stories. Huge volumes of data are generated, behind the scenes, every time a person watches a video, clicks on an ad, or performs just about any other action online. And if this user-generated data can be processed rapidly, it could provide new ways to tailor the content on a website, in close to real time.
Many different approaches are possible to realize this opportunity – many web companies already use analytics to optimize their content throughout the course of a day. Aggregation sites, for example, tweak the layout of their home pages by monitoring the popularity of different articles. But traditionally, information has been collected, stored, and then analyzed afterward. The MIT journal article correctly highlights that using seconds-old data to tailor content automatically is the next step; in particular, a lot of the information generated in real time relates to advertising. Real-time applications, whether using traditional database technology or Hadoop, stand to become much more sophisticated going forward: "When people say real-time Web today, they have a narrow view of it – consumer applications like Twitter, Facebook, and a little bit of search." Startups are also beginning to look at different technologies to capture and leverage current data and the patterns therein.
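As a toy illustration of that kind of seconds-old tailoring, here is a minimal sketch that re-ranks articles by exponentially decayed click counts, so recent clicks count for more than older ones. All names and the five-minute half-life are invented for illustration.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of "seconds-old" content tailoring: score each article with
// an exponentially decayed click count and re-rank the page on every click.
public class FreshnessRanker {
    private static final double HALF_LIFE_S = 300.0; // clicks lose half their weight in 5 min
    private final Map<String, Double> score = new HashMap<>();
    private final Map<String, Long> lastSeen = new HashMap<>();

    public void onClick(String article, long nowSec) {
        // Decay the old score toward zero, then add the fresh click.
        double decayed = score.getOrDefault(article, 0.0)
                * Math.pow(0.5, (nowSec - lastSeen.getOrDefault(article, nowSec)) / HALF_LIFE_S);
        score.put(article, decayed + 1.0);
        lastSeen.put(article, nowSec);
    }

    public List<String> ranked() {
        List<String> articles = new ArrayList<>(score.keySet());
        articles.sort((a, b) -> Double.compare(score.get(b), score.get(a)));
        return articles;
    }

    public static void main(String[] args) {
        FreshnessRanker ranker = new FreshnessRanker();
        ranker.onClick("election-live", 0);
        ranker.onClick("recipe", 0);
        ranker.onClick("election-live", 10);
        System.out.println(ranker.ranked()); // [election-live, recipe]
    }
}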
Last month, in a very nice article, The Economist highlighted that, thanks to Moore's law (a doubling of capacity every 18 months or so), chips, sensors and radio devices have become so small and cheap that they can be embedded virtually anywhere.

Today, two-thirds of new products already come with some electronics built in. By 2017 there could be 7 trillion wirelessly connected devices and objects—about 1,000 per person. Sensors and chips will produce huge amounts of data. And IT systems are becoming powerful enough to analyse them in real time and predict how things will evolve.
Building on the smart-grid space – wherein colossal waste in transmission, distribution and consumption could be substantially reduced – the Economist notes that it is in big cities that "smartification" will have the most impact. A plethora of systems can be made more intelligent and then combined into a "system of systems": not just transport and the power grid, but public safety, water supply and even health care (think remote monitoring of patients). With the help of Cisco, another big IT firm, the South Korean city of Incheon aims to become a "Smart+Connected" community, with virtual government services, green energy services and intelligent buildings.

If one were to look at a different place – inside the enterprise – collaboration is enabling people to come together more easily and readily than before, and the upside potential is very high, with possibilities like workflow getting baked natively into documents to enable real-time communication among stakeholders.

What is the technology that would enable all this to happen, covering data, transactions, content, collaboration, streaming and more inside smart enterprises? It is Complex Event Processing, or CEP: a technology in transition (the hallmark of any maturing technology) and, more importantly, a precursor to enabling enterprises to get ready for more sophisticated autonomous real-time decision making.


At its core, CEP collects data from various sources as raw events, applies smart algorithms (which learn over time), and invokes the right set of rules to decipher patterns – complex scenarios – in real time, distilling the trends therein and churning out results that enable real-time decision making. Smart enterprises can ill afford not to have such technologies deployed internally. IBM calls this the smart planet revolution; all the tech majors, from software players to infrastructure players, want a decisive play here. Every utility service and competitive sectors like transport and retail are looking at adopting this technology aggressively in areas like supply chain management and finance optimization.
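To make the pattern-over-events idea concrete, here is a minimal sketch of one of the simplest CEP building blocks, a sliding time window: fire a rule when three or more failure events arrive within sixty seconds. Real engines add event algebras, learning and distribution; all names and thresholds here are illustrative.

import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sliding-window CEP sketch: keep recent raw events and fire when
// a pattern (>= 3 failures within 60 seconds) emerges.
public class SlidingWindowCep {
    private static final long WINDOW_MS = 60_000;
    private static final int THRESHOLD = 3;
    private final Deque<Long> failureTimes = new ArrayDeque<>();

    /** Feed one raw "failure" event; returns true when the pattern fires. */
    public boolean onFailure(long timestampMs) {
        failureTimes.addLast(timestampMs);
        // Evict events that have fallen out of the window.
        while (!failureTimes.isEmpty()
                && timestampMs - failureTimes.peekFirst() > WINDOW_MS) {
            failureTimes.removeFirst();
        }
        return failureTimes.size() >= THRESHOLD;
    }

    public static void main(String[] args) {
        SlidingWindowCep cep = new SlidingWindowCep();
        long t0 = System.currentTimeMillis();
        System.out.println(cep.onFailure(t0));          // false
        System.out.println(cep.onFailure(t0 + 10_000)); // false
        System.out.println(cep.onFailure(t0 + 20_000)); // true: 3 within 60s
    }
}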
But smart infrastructure alone won't be sufficient for organizations to leverage this – it calls for smart thinking at the process level and in the decision-making rooms. Today information overload is almost choking enterprises and, to an extent, individuals.

Existing architectures centered around RDBMSs and modern web services/SOA are not equipped to handle data of this nature, range, volume and complexity. While processing data from very diverse sources, CEP engines can model data and apply native AI capabilities to event streams. With the right executive interventions and decision making, smart enterprises of the future may have the wherewithal to divide and conquer where needed and synthesize various streams of information in real time in the right way. Such a process would unlock the proverbial hold of central decision making within enterprises and allow decentralized, autonomous decision making in real time to maintain a competitive edge in the marketplace.
This would lead to more and more adoption, and the resultant complexity would force the technology to improve more rapidly. In the ideal scenario, the complex mesh that an enterprise is would get more and more opportunity to respond with real-time data at all its nodes, making a smart enterprise armed with such solutions a veritable force in the marketplace. There is no longer the luxury of building cubes and waiting for analyses when real-time forces can torpedo any strategic plan with a rapid killer response from market forces – that's where CEP fits in very well, providing a real-time, digitally unified view of various scenarios and their overall effects, enabling much more rapid response in real time. Remember the saying: "the CEO can't even blink for a moment."
When you look at the key Google Labs projects (courtesy of CIO magazine), you can't ignore the Google Goggles project or, for that matter, Google's voice-to-message-to-voice translation network – these are all precursors of the complex things processed in the CEP world. I took Google as an example because this company wants to solve really big and complex problems as part of its charter!
Real business problems seeking new technology solutions, and emerging technology solutions maturing to serve increasingly complex business problems, are the perfect way to catalyze growth in new/emerging areas, and here we see the two playing together well. Forward-looking plans, simulation and optimization are all direct derivatives of good CEP solutions, and with such solutions deployed well we can see the emergence of a new digital nervous system inside enterprises that in some cases can trigger the creation of new business models.

Cloud Computing & IT Spending!

In continuation of the previous note:

Involved discussions on cloud invariably turn towards the question: would embracing cloud help business bring down IT spend? This question assumes more relevance given that early adopters are increasing fast and IT spend, by various counts, looks flat to marginally higher in the coming quarters.

Today cloud vendors seem mostly to follow the path of playing the volume game to make their money – very different from conventional approaches. In most cases, early adopters in the past used to spend at higher cost to gain the first-mover advantage in business and thereby recoup the costs while the technology matured and began to be offered at lower prices. But today, part of the confusion is sown by some vendors when they boldly proclaim huge cuts in IT spend with the adoption of the cloud. I think this may be true in a few cases, but it is hardly the way to push cloud computing. Why not, one may ask.

As recorded by Joe McKendrick, at a recent panel at Interop, AT&T's Joe Weinman raised doubts about the sustainability of cloud computing economics, describing a scenario in which they break down as enterprise management requirements come into play. "I'm not sure there are any unit-cost advantages that are sustainable among large enterprises," he said. He expects adoption of external cloud computing in some areas, and private capabilities for others.

The belief is that in the immediate future, as investments get made in establishing infrastructure by cloud service providers and in businesses setting up private clouds, there may be a surge of spending; but with more sophisticated infrastructure management technologies, the cost of automation may come down, just as the net benefits of virtualization begin to substantially outweigh the costs incurred.

Let's look at the acceleration of processing power inside the data centers: this game never ceases to progress, so at any point in time we will see more and more investment to embrace the latest technology. The corollary is that business begins to see better and better usage of the progress, but as we have seen over the last 25 years of this cycle, the spend in absolute terms has not come down at all. All of us know that very high percentages of virtualization are a pipe dream even in the best of circumstances. A new hardware refresh cycle looks to be a safe bet as cloud computing gains more and more traction. Data centers are in a constant race to improve performance, flexibility and delivery – in turn opening more opportunities for new software/services. All of this will begin to reflect in additional IT spending.

All of us recognize the explosion of information and data in our day-to-day lives, and business experiences a similar explosion of data. From handling a few terabytes of data years back, today we are seeing the top five Internet giants turning out several hundred terabytes of data annually. The other day a friend of mine told me he had seen reports of a smartphone-launch push communication almost clogging Internet bandwidth earlier this quarter. This means that while unit-cost savings may look good on paper, information complexity will drive businesses to lean ever more on technologies like cloud – imagine every business beginning to expect and invest in cloud-based analytics; the infrastructure needed to support such initiatives could become significant. Leading VCs like Evangelos are rolling out the red carpet for investments in creating companies that can cater to this pent-up need. Now extend the argument further: more and more data, followed by more and more complex real-time analytics. Some skeptics may say this is a never-ending game of pushing the frontier – they may well be right in a way, but this could become the way of doing business in this century.

Like BI being a big-ticket/focus item inside enterprises today, we shall potentially see analytics in the cloud becoming a very large component of the cloud activity/spend landscape. David Linthicum notes a trend toward much of the innovation around leveraging larger amounts of structured and unstructured data for business intelligence, or general business operations, coming from innovation in the cloud, not from traditional on-premise software moving up to the cloud. This is a shift. This trend, combined with the on-demand scalability that cloud providers offer, could be the one-two punch that sends much of our business data to cloud computing platforms.

Let's look at this from a different plane. Business will begin to leverage advancements in cloud to seek a more differentiated edge – this means the small, efficient business can really compete well against the large, inefficient business and win in real circumstances. The winner would spend more to maintain the edge, and the responsible lagging business would imitate or surpass the winning model; this means more and more cloud/IT involvement is a foregone conclusion. In an expanding business scenario, the drive to maximize benefits would in general leave little room for pull-backs in IT enablement. Leveraging cost, efficiency and flexibility to improve productivity would be self-serving for business, but it would also create more and more incentive to keep stretching this approach, in turn fuelling more leverage of IT enablement. In a dream situation, IT enablement would release locked-in value from investments, fund future strategic initiatives, directly help in creating new business models, facilitate new business channels, and so on.
The bottom line: expect more and more business leverage of IT, and ironically, with clouds, this may come not through less IT but through engaging IT a lot more. If business begins to demand more bang for the buck, the scenario looks plausible: a few mega-vendors may dominate the space and be at each other's throats promising low-cost, high-value computing benefits to the CIO. But I still believe the benefit frontiers will keep shifting, forcing business to commit to more and more IT leverage!

Cloud Adoption: Early Signs

I was in a closed-door meeting with some customers and influencers, analyzing the state of the cloud computing market and future directions, including key players' maturity, monetization and derivative impacts throughout the cloud landscape. The clear consensus was that we are set to enter a stage where increasing levels of outsourcing will be pointed towards cloud service providers.

Let's begin from the beginning: the definition of what cloud computing is seems to be still evolving – Infrastructure, Platform, Service – all seem to be used interchangeably at times, and the overlap in what they can offer grows more pronounced with every discussion of their capabilities and demonstrated early success stories. The consensus is that phenomenal growth is expected in the on-demand infrastructure and platform markets.

The vendors are getting more and more confident about growth and adoption. The market leaders are beginning to report rapid adoption from customers across industries and regions, spanning varied sizes and supporting high transaction volumes. One of the most important drivers of this adoption seems to stem from the desire of business to look at SaaS and beyond into the cloud, based mostly on considerations of cost takeouts and, in a few cases, on the desire to be an early adopter of the cloud. Considerations centering around change management, integration, migration of data, security rejig inside the enterprise, IT staff training, and the business push to move faster and scale ad infinitum with cloud all become key areas to be managed as the cloud goes mainstream, and they are definitive factors in the adoption curve. Two things are common in this adoption.

Short-term, overflow or burst capacity to supplement on-premise assets – it appears that this may well be the primary use case for as-a-service cloud platforms in the market today. Some consider this a stop-gap hosting arrangement; for others it is an opportunity for cloud-based services and applications to add capabilities on top of the existing on-premise stack.


The adoption is not as smooth as the flip of a switch. Sophisticated users are beginning to weigh private (hybrid) clouds; discussions about multi-tenancy and single instancing are getting more and more attention, in some cases already sowing the seeds of religious debates. Many users are beginning to assess the relevance of the cloud in terms of the ability to integrate with communities, to share existing environments with other players, to integrate third-party apps, and to use and pay for real-time capacity, storage and compute usage. In terms of top-of-mind recall and from the experience pool, it appears that Google, Amazon and SFDC are the frontrunners. The Azure platform is beginning to rear its head in the discussions, albeit occasionally, while everyone admitted that Microsoft will one day become a strong force to contend with. Some shared statistics: Google may have more than 10 million paid users, more than half a million developers have courted the Amazon web services platform, and SFDC has more than a million subscribers trying to leverage the Force.com platform. The surprising part is that, like the proverbial law that work expands to fill the time available, more and more cloud-centric offers will keep coming to eat the budget – so be assured no budget shrinkage is going to happen in the short to medium term while capabilities increase.

How Much Information & Overload Phenomenon!

The Global Information Industry Center at UCSD has put together statistics about how much information gets consumed by Americans every day. Two previous studies, by Peter Lyman and Hal Varian in 2000 and 2003, analyzed the quantity of original content created, rather than what was consumed. A more recent study measured consumption, but estimated that only 0.3 zettabytes were consumed worldwide in 2007.
In 2008, Americans consumed information for about 1.3 trillion hours, an average of almost 12 hours per day. Consumption totaled 3.6 zettabytes and 10,845 trillion words, corresponding to 100,500 words and 34 gigabytes for an average person on an average day (a rough back-of-envelope check of these per-person figures follows the list below). A zettabyte is 10 to the 21st power bytes, a million million gigabytes. Hours of information consumption grew at 2.6 percent per year from 1980 to 2008, due to a combination of population growth and increasing hours per capita, from 7.4 to 11.8. More surprising is that information consumption in bytes increased at only 5.4 percent per year, even though the capacity to process data, driven by Moore's Law, rose at least 30 percent per year. One reason for the slow growth in bytes is that color TV changed little over that period. High-definition TV is increasing the number of bytes in TV programs, but slowly. Additional stats to note:
The traditional media of radio and TV still dominate our consumption per day, with a total of 60 percent of the hours. In total, more than three-quarters of U.S. households' information time is spent with non-computer sources. Despite this, computers have had major effects on some aspects of information consumption. In the past, information consumption was overwhelmingly passive, with telephone being the only interactive medium. Thanks to computers, a full third of words and more than half of bytes are now received interactively. Reading, which was in decline due to the growth of television, tripled from 1980 to 2008, because it is the overwhelmingly preferred way to receive words on the Internet.
a. The average person spends 2 hours a day on the computer
b. 100,000 words are read each day.
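As a rough sanity check of the study's per-person figures, assuming a 2008 US population of about 300 million (a number not given in the excerpt above):

\[
\frac{3.6 \times 10^{21}\ \text{bytes/year}}{3.0 \times 10^{8}\ \text{people} \times 365\ \text{days}}
\approx 3.3 \times 10^{10}\ \text{bytes} \approx 33\ \text{GB per person per day}
\]

\[
\frac{10{,}845 \times 10^{12}\ \text{words/year}}{3.0 \times 10^{8}\ \text{people} \times 365\ \text{days}}
\approx 9.9 \times 10^{4}\ \text{words per person per day}
\]

Both are consistent with the reported 34 gigabytes and 100,500 words, to within rounding and the exact population figure used.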
Clearly, in this world, there is just too much information, even beyond the normal channels: email, friends to stay in touch with, meetings to attend, research papers one needs to stay on top of, books to read, TiVo recordings and so forth. With social media and RSS feeds, one can literally have hundreds of posts each day from highly selective sources coming through feed aggregators. To make matters worse, people are now finding ways to take non-textual information sources such as audio files (e.g. podcasts) and even video files and tie them to RSS. A large fraction of the people one interacts with are walking around with their eyes glazed, seemingly on auto-pilot, speaking a mile a minute about this blog they read, this documentary they TiVo'd, this video they saw on the Net, this new startup company that's hot.
The Rise of Interaction: Most sources of information in the past were consumed passively. Listening to music on the radio, for example, does not require any interaction beyond selecting a channel, nor any attention thereafter. Telephone calls were the only interactive form of information, and they are only 5 percent of words and a negligible fraction of bytes. The arrival of home computers has dramatically changed this, as computer games are highly interactive, and most home computer programs (such as writing or working with user-generated content) are as well. Arguably, web use is also highly interactive, with multiple decisions each minute about what to click on next. A third of information today is received interactively. This is an overwhelming transformation, and it is not surprising if it causes some cognitive changes. These changes may not all be good, but they will be widespread.

Private Clouds: Real & Relevant?

An interesting thread is now on within the Enterprise Irregulars group on what constitutes private clouds – again a very enlightened discussion. The issue I want to talk about is: if private clouds do indeed exist, what is their adoption path? Let's start from the beginning: can we use the term "cloud" to describe the changes that happen inside IT architectures within the enterprise? Though there can be no definitive answer, a series of transitions to a new order of things will, in my opinion, become imminent.

The pressures on IT and the engulfing sense of change in the IT landscape are hard to overlook. The pressures mean more organizations will look seriously at SaaS, re-negotiate license terms, rapidly adopt virtualization, and so on. As part of this and beyond, internal IT will be forced more and more to show more bang for the buck, and it is my view that organizations will increasingly question committed costs and attack them systematically, where earlier only sporadic efforts marked their endeavors. This could also unlock additional resources that could potentially go towards funding new initiatives. A good number of enterprises are going this route, and in some cases their service partners are prodding them this way.

The change may in many senses make IT inside enterprises look, behave and perform like a cloud computing provider, though there would be limitations (in most places serious ones) on scale, usage assessment, security and the like. There are strong incentives propelling enterprises to channel their efforts and investments over the next few years into mimicking a private cloud service architecture managed internally. This could well become their way of staging towards finally embracing the (public) cloud over a period of time. These baby steps to nearly full-blown efforts are needed to prepare organizations to embrace clouds; it may not be feasible at all to make the shift from on-premise to cloud like flipping a switch. Serious licensing issues, maturity, lack of readiness, integration concerns, and security all come in the way of enterprises looking at the public cloud in a holistic way. These steps should not be looked down upon – they may very well become the foundation for moving into public clouds in a big way.

Let's think this through: setting up a private cloud is a motherhood statement at best (in many organizational surveys, setting up private clouds is not in the CIO's top three priorities – if anything, virtualization finds a place). Making it happen in a credible way means re-examining most parts of IT functioning and the business-IT relationship inside the enterprise. Most elements of the bedrock get affected: processes, culture, metrics, performance, funding, service levels and so on. Well-thought-out frameworks and roadmaps need to be put in place to make this transition successful. These frameworks need to cater not only to setting up an internal cloud but eventually to embracing the public cloud over the years – not as easy a task as it appears. A few of the organizations that master this transition may also look at making a business out of it – so it's a journey that needs to be travelled on the way to embracing public clouds. Some businesses may take a staged approach and call it a private cloud, an internal cloud or whatever, but eventually the road may lead to public clouds!

Analog Dollars Into Digital Dimes: Challenging Transitions

I was in a meeting with a CIO in New York on Friday where he outlined the onslaught of disruptive innovation in his industry and the broad set of challenges it poses. The magnitude of the change his organization needed to face was mind-boggling! Just before I entered the meeting, I saw a very interesting interview on the television in the lounge. Jeff Zucker, President and CEO of NBC Universal, has succinctly captured what is going on in the media industry with his widely quoted comment that new digital business models are turning media revenues from analog dollars into digital dimes. As he puts it, the key problem for established media companies is to come up with new, innovative business models that will enable them to make money and survive as their industry accelerates the transition from a business model based mostly on analog dollars to one based mostly on digital dimes. The ongoing tussle over online advertising is yet another example of the pain such transitions bring.

What a difficult and interesting situation to be in. Let's see how leading enterprises typically face issues centered around disruptive models, where the effects can be far-reaching if handled well and disastrous if handled badly. How does a well-run business handle the shift? In my view, while it is tempting to tackle everything material in a startup mode to tide over the situation, reality shows that this is next to impossible. Would-be innovators know that one of their biggest challenges is systematically identifying the innovations with the greatest likelihood of creating disruptive growth. Pick the wrong one, and you squander a year or more of focus and investment. It is not the luck of the draw anymore: by conducting a series of customer, portfolio, and competitor diagnostics, companies in any industry can quickly pinpoint the highest-potential opportunities and the best business models for bringing them to market. A successful approach to putting a positive spin on the disruption centers on figuring out how to leverage the new innovations to evolve the organization, culture and business models into the future. Too often, solutions revolve around assessing and leveraging the skills and capabilities of the enterprise in adapting to the new realities. The challenge is figuring out how to leverage existing relationships and expand the adoption quotient across the enterprise ecosystem. A judicious assessment of what parts of the existing organization could be useful and what needs to be ripped out, replaced or retired must be carried out in the most objective manner. Preparing leadership internally to manage the disruption is a prerequisite for success.

In reality, these seemingly ordinary questions and concerns may turn out to be the toughest to crack, but there is no point in trying to avoid facing reality. In many cases, matter-of-fact cultural impediments may overshadow other concerns in managing the transition. There is no other way forward except to properly manage the delicate balance between winning and excelling in current operations and quickly embracing disruptive innovations, and to lay a solid plan for growth.

IT Savvy Means Getting An Edge From IT

IT's importance keeps growing inside organizations: more competition, concerns about ROI and BVIT, pressures on resourcing, offshoring strategies, and heightened expectations of IT by the business. All enterprises undergo such changes, and an appropriate framework with a three-year rolling-plan perspective for IT strategy and planning is absolutely essential for any medium-sized to large IT user organization.

I just read a nice interview in the WSJ with MIT's Peter Weill on IT Savvy, his (excellent) recent book, co-authored with Jeanne Ross. In essence the interview covers the main ideas of the book: standardization for innovation, IT as strategic asset vs. liability, creating digital platforms, and the importance of connecting projects. A couple of excerpts:

BUSINESS INSIGHT: Your newest book is about IT-savvy companies. How do you define IT savvy?
DR. WEILL: IT-savvy companies make information technology a strategic asset. The opposite of a strategic asset, of course, is a strategic liability. And there are many companies who feel their IT is a strategic liability. In those companies, the IT landscape is siloed, expensive and slow to change, and managers can't get the data they want.
IT-savvy companies are just the opposite. They use their technology not only to reduce costs today by standardizing and digitizing their core processes, but the information they summarize from that gives them ideas about where to innovate in the future. A third element is that IT-savvy companies use their digital platform to collaborate with other companies in their ecosystem of customers and suppliers.
So, IT-savvy companies are not just about savvy IT departments. It's about the whole company thinking digitally.
BUSINESS INSIGHT: You've done some research that suggests IT-savvy companies are more profitable than others. Tell me a bit about that.
DR. WEILL: The IT-savvy companies are 21% more profitable than non-IT-savvy companies. And the profitability shows up in two ways. One is that IT-savvy companies have identified the best way to run their core day-to-day processes. Think about UPS or Southwest Airlines or Amazon: They run those core processes flawlessly, 24 hours a day.
The second thing is that IT-savvy companies are faster to market with new products and services that are add-ons, because their innovations are so much easier to integrate than in a company with siloed technology architecture, where you have to glue together everything and test it and make sure that it all works. We call that the agility paradox—the companies that have more standardized and digitized business processes are faster to market and get more revenue from new products.
Those are the two sources of their greater profitability: lower costs for running existing business processes, and faster innovation.
DR. WEILL: The real secret to IT-savvy companies is that each project links together—like Lego blocks—to create a reusable platform. IT-savvy companies think reuse first. When they have a new idea, the first question they ask is: Can we use existing data, applications and infrastructure to get that idea to market fast? When we look at the impact of reusing processes and applications, we see measurable benefits in the top and bottom lines.


The book also covers defining your operating model, revamping your IT funding model, allocating decision rights and accountability, driving value from IT and leading an IT Savvy firm.

This is a book highly regarded by the cognoscenti. It starts by asking what being IT savvy means, and answers: the ability to use IT to consistently improve firm performance. The book encapsulates very powerful observations and statements that matter:

- You have to stop thinking about IT as a set of solutions and start thinking about integration and standardization. Because IT does integration and standardization well.

- IT Savvy firms have 20% higher margins than their competitors.

- An operating model is a prerequisite for committing to sound investments in IT

- IT funding is important, as systems become the firm's legacy that influences, constrains or dictates how business processes are performed. IT funding decisions are long-term strategic decisions that implement the operating model

IT Savvy is based on three main ideas with some commentary from the reviewer.

1- Fix what is broken about IT, which concentrates on having a clear vision of how IT will support business operations and a well-understood funding model. In most places, IT is delegated and benignly neglected in the enterprise, with disastrous consequences of underperformance and poor leverage.

2- Build a digitized platform to standardize and automate the processes that are not going to change, so that focus shifts to the elements that do change. The platform idea is a powerful one and can drive significant margin, operational and strategic advantage.

3- Exploit the platform for growth by focusing on leading the organizational changes that drive value from the platform. With a platform built for scale, leverage the efficiencies that scale can deliver. Ironically, many enterprises fail to do one of these two!

Don't miss the IT Savvy assessment methodology. Overall, a very important book: a must-read.

The Cloud Monetization Paradigm

Too often, when large enterprises begin to look at the numbers justifying a move onto the cloud, seasoned executives are struck by the limited enterprise-class billing choices the model is able to provide. Granted, this is an evolving space, but the urgency of getting this right can't be overstated.

Let's look at the basics:

The advantages of cloud are well known:

-Pay subscription on usage
-Enables usage of the current version and focus is on continuous service
-Economies of scale and anytime, anywhere, ubiquitous use

There is a clear upside to the whole cloud horizon when we are able to make determinations on issues around providing options to customers and providers in terms of the following (a billing sketch follows this list):

-Flexible pricing – spectrum from subscriptions to pay-per-use
-High Volume – Low touch sales model
-Elastic scaling – ability to scale up and down
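To make the pricing spectrum concrete, here is a minimal sketch of a hybrid model that combines a flat subscription (with an included allowance) and metered pay-per-use beyond it. The rates, allowance and names are invented for illustration and do not reflect any vendor's actual billing model.

// Hypothetical sketch of hybrid cloud billing: a flat subscription that
// includes a usage allowance, plus metered pay-per-use beyond it.
public class HybridBilling {
    static final double SUBSCRIPTION_FEE = 99.00;  // flat monthly fee (invented)
    static final long INCLUDED_UNITS = 10_000;     // e.g., API calls or compute-hours
    static final double OVERAGE_RATE = 0.02;       // per unit beyond the allowance

    static double monthlyCharge(long meteredUnits) {
        long overage = Math.max(0, meteredUnits - INCLUDED_UNITS);
        return SUBSCRIPTION_FEE + overage * OVERAGE_RATE;
    }

    public static void main(String[] args) {
        System.out.println(monthlyCharge(8_000));  // 99.00 (within allowance)
        System.out.println(monthlyCharge(25_000)); // 99.00 + 15,000 * 0.02 = 399.00
    }
}

Even this toy version hints at why enterprise-class billing is hard: real offerings layer tiers, commitments, per-resource meters and channel revenue shares on top of this basic shape.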

When existing business computing models get disrupted and order-of-magnitude business performance improvements are sought, the cloud paradigm paradoxically offers both unmatched opportunity and competitive challenges for software and services providers. But seen from a customer perspective, too often the thought chain gets dimmed when the actual issue of metering and billing gets reviewed. It transpires that as cloud mindshare picks up among IT buyers and the various players begin to align, the locus of value creation is shifting within the cloud ecosystem, and with it the balance of power between service providers and channel partners. This is a huge change in the ecosystem. The reality is that a successful transition to SaaS will affect every aspect of the company, including key operational systems such as billing and payments.

Providing multiple levels of flexibility to customers in metering and billing is far too complex in real life and warrants careful determination of choices. Billing, administration and related marketing capabilities are essential in meeting the new operational challenges of a Software-as-a-Service business. More often than not, the new models inject potential conflicts over roles among service providers and channel partners, and in the process, as Saugatuck brings out, raise very fundamental questions like:

-Who owns the customer?
- What is the best way to compensate channel partners?
- Is the partner paid a commission for managing a customer that the vendor owns?
- Or does the partner own the customer and pay a revenue share?
- Can the partner upsell its own offerings and keep it all?

Obviously there are many possible answers to these questions, and some answers are beginning to appear; in this direction, the landmark study helps find answers by taking a detail-oriented view of the evaluation options. The emergence of cloud-based billing solutions reflects the fast pace at which things are evolving, and the report notes that cloud billing solutions have targeted Web providers of content, online gaming, virtual worlds, media and entertainment – primarily consumer offerings – as well as SaaS, SaaS enablement, PaaS, business media, business content and other online business solutions. On-premise solutions have moved into a variety of other verticals and segments, such as media and entertainment, financial services, transportation, SaaS, SaaS enablement and cloud infrastructure.

I think we will have an ideal monetization view across the cloud ecosystem a few years from now, but early adopters need more choices and more analyses, and here's one helping the progress.

Services, Innovation & Competitiveness

Just came across the report from the Royal Society – the UK's national academy of science – Hidden Wealth: the contribution of science to service sector innovation.
Innovation in services and its linkage to the hard sciences is one of the core focus areas of this report. This excellent report presents the findings of a major study on the role of science, technology, engineering and mathematics (STEM) in innovation in the services sector – one of the pioneering, comprehensive investigations of the impact of services innovation in the current age. Why is the service sector economy so important? Look at the significance: over two-thirds of the global economy (GDP) is centered around services, with developed countries seeing a greater percentage of service-sector contribution to their national economies. As by far the single largest employing sector in the developed world, the significance of the service industry can't be overlooked. It goes without saying that the competitiveness of the service sector, along with government policies and implementation efficiency, would to a great extent determine the competitiveness of a nation.
The study revealed that STEM is omnipresent in the service sector but, unlike in the industrial sector, its impact is rarely recognized. The Hidden Wealth study points out that despite the great significance of services to the global economy, their nature remains somewhat undefined and invisible under a majority of circumstances!
Employment and economic vibrancy in developed countries hinge on the service sector, so this is an important body of study, and the findings are very relevant and useful to developed nations in large measure and, by extension, to the global economy.

The report reinforces the traditionally held view about the service sector: "Our main conclusion, . . . is that services are very likely to remain central to the new economy, not least because we are at or near a tipping point: innovations now underway seem likely to change dramatically the way we live and to generate many services (though few can be predicted in detail at present). . . ."

"Scientific and technological developments have triggered and brought about major transformations in services industries and public services, with the advent of the internet and world-wide-web . However, the full extent of STEM's current contribution is hidden from view - it is not easily visible to those outside the process and is consequently under-appreciated by the service sector, policymakers and the academic research community. This blind spot threatens to hinder the development of effective innovation policies and the development of new business models and practices in the UK."

"Services rely significantly on external STEM capabilities to support or stimulate innovation - indeed they tend to innovate in more external and interdependent ways than manufacturing firms. This can take the form of bought-in expertise or technology and collaborations with suppliers, service users, consultants or the public science base. Through these 'open innovation' models, service organisations respond to the knowledge, science, and technology that they see being developed elsewhere (eg in other companies or universities), and use collaboration to solve their problems in innovative and competitive ways."
The Hidden Wealth report eloquently makes the case that most of the economic value of breakthroughs in science and technology will likely be realized through services, and thus, this is the area around which most new, high paying jobs will be created. In conclusion:
"The main message of this study is that the contribution of STEM to service innovation is not an historic legacy, nor simply a matter of the provision of 'human capital' - important as the latter may be. STEM provides invaluable perspectives and tools that will help to nurture emergent service models and define future generations of services for the benefit of businesses, government, and citizens."

Customers: Major Force In Innovation!

While trying to answer questions like what kinds of innovation will best help us emerge from the recession stronger than we went in, gain market share, or thrive in new and exciting markets, the GT report finds that the U.S. is the easiest country in which to create innovative products, services and businesses. Different geographical regions place varying degrees of emphasis on innovation: innovation in general is regarded as a top corporate priority by more executives in Western Europe (41 per cent) than in North America (33 per cent) or Asia Pacific (31 per cent), while Asia Pacific places higher emphasis on customer-led innovation. One of the key findings is that the role of customers, along with the attitude of companies toward their customers, emerges as a defining characteristic in the survey. The report notes: "No longer simply passive recipients of goods and services, customers now help to shape the future of their own consumption." They are now the leading source of innovations globally (41 per cent), more important than anything inside companies, including research and development. Clearly, co-creation seems to be driving more value.

On innovation, the report draws four main conclusions:

- Pay more attention to what your customers say and their ideas for innovations
- Consider expanding your open innovation projects and working with more third parties
- Look outwards, explore new markets, rather than drawing back into your domestic market
- Capitalise on the great shifts of the age, the move away from carbon-based energy and the emergence of China and India as major trading nations with huge consumer markets.

As I wrote here, in this age of the contribution economy – a phenomenon we have been seeing ever since the Internet started to connect everyone to everyone else all the time – people from around the world can more easily contribute, leading to exploding results, caused by the coming together of energy, ideas, and knowledge. Some of the more familiar examples of these collaborative efforts include blogs, open-source software, podcasts, and even the nonprofit online encyclopedia Wikipedia. We are also seeing customers leading the charge of innovation, and The Economist article on user-led innovation exemplifies a new form of collaboration. The rise of online communities, together with the development of powerful and easy-to-use design tools, seems to be boosting the phenomenon, as well as bringing it to the attention of a wider audience.

Replicating from MySQL to Drizzle and Beyond

Drizzle is one of the really great pieces of technology to emerge from the MySQL diaspora--a lightweight, scalable, and pluggable database for web applications. I am therefore delighted that Marcus Erikkson has published a patch to Tungsten that allows replication from MySQL to Drizzle. He's also working on implementing Drizzle-to-Drizzle support, which will be very exciting.

Marcus has submitted the patch to us and I have reviewed the code. It's quite supportable, so I plan to integrate it as soon as we are done with our next Tungsten release, which will post around 5 November. You will be able to build and run it using our new community builds.

This brings up a question--what about replicating from MySQL to PostgreSQL? What about other databases? I get the PostgreSQL replication question fairly often but it may be a while before our in-house team can implement plug-in support for it. Anybody want to submit a patch in the meantime? Post in the Tungsten forums if you have ideas and need help to get the work done. Tungsten Replicator code is very modular and it is not hard to add new database support.
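To give a feel for why new database support is tractable, here is a toy sketch of what a pluggable applier might look like: replication events carrying a global ID and SQL are replayed through a database-specific JDBC connection. These types are invented for illustration and are not the actual Tungsten Replicator API.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

// Toy sketch of pluggable replication. These types are invented for
// illustration and are NOT the real Tungsten Replicator API.
interface ReplicationApplier {
    // Replay one replication event, identified by a global ID, on the target.
    void apply(String globalId, String sql) throws SQLException;
}

// Supporting a new database then becomes largely a matter of another JDBC
// URL plus any dialect-specific statement rewriting.
class JdbcApplier implements ReplicationApplier {
    private final Connection conn;

    JdbcApplier(String jdbcUrl, String user, String password) throws SQLException {
        this.conn = DriverManager.getConnection(jdbcUrl, user, password);
    }

    @Override
    public void apply(String globalId, String sql) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.execute(sql); // replay the event on the slave
            System.out.println("applied event " + globalId);
        }
    }
}

class ApplierDemo {
    public static void main(String[] args) throws SQLException {
        // Hypothetical target: a local PostgreSQL database named "replica".
        ReplicationApplier applier =
                new JdbcApplier("jdbc:postgresql://localhost/replica", "tungsten", "secret");
        applier.apply("000023:4567", "INSERT INTO t VALUES (1)");
    }
}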

Meanwhile, go Marcus!!

Community Builds for Tungsten Clustering

It's been almost two months since I have posted anything on the Scale-Out Blog, as our entire team has been heads-down working on Tungsten. We now have a number of accomplishments that are worth writing articles about. Item one on that list is community builds for Tungsten clusters.

Tungsten community builds offer a bone-simple process to check out and build Tungsten clustering software. The result is a fully integrated package that includes replication, management, monitoring, and SQL routing. The community builds work for MySQL 5.0 and 5.1 and also allow you to set up basic replication from MySQL to Oracle.

Community builds do not include much of the logic for autonomic management, including automated failover and the sophisticated rules that keep databases up and running rain or shine. Those and other features like floating IP address support are part of the commercial Tungsten software. PostgreSQL and Oracle-to-Oracle support is also commercial-only, at least for the time being.

Community builds do include our standard installation process, which allows you to set up a working cluster in a few minutes. You can back up and restore databases, check liveness of cluster members, fail over master databases for maintenance, and use a lot of other handy features. There is also full documentation, located here.

To get started, you need a host running Mac OS X, Linux, or Solaris that meets the following prerequisites. On Linux you can usually satisfy these requirements using Yum or Apt-get if the required software is not already there.
  • Java JDK 1.5 or higher.
  • Ant 1.7.0 or higher for builds
  • Subversion. We use version 1.6.1
  • MySQL 5.0 or 5.1 (only on hosts where cluster is installed)
  • Ruby 1.8.5 or greater (only on hosts where cluster is installed)
Now you can grab the software and do a build. Make a work directory, cd to it, and enter the following commands. (Due to truncation on the blog the SVN URL looks a little funny. Don't be fooled.)
svn checkout \
https://tungsten.svn.sourceforge.net/\
svnroot/tungsten/trunk/community

cd community
./release-community.sh # (Press ENTER when prompted)
The release-community.sh script checks out most of the Tungsten code for you and does a build. IMPORTANT NOTE: The command shown above builds SVN HEAD, which means you may have a life of adventure. You can also build off branches which are more or less stable. Look at the available config files in the community directory.

After the build finishes, you have ready-to-install clustering software. You can scp the resulting tar.gz file out to another host or just cd directly into the build itself as shown below and run the configure script, which sets up Tungsten software on a single host.
cd build/tungsten-community-2009-1.2
./configure
You may need to read the manuals so you get all the answers right. The installation manual is posted here at www.continuent.com. You'll also need to look at the Replication Guide, Chapter 2 to see how to set up MySQL properly. We'll do that automatically in the future, but for now it's help yourself. (Don't worry: the database set-up is easy.)

To make the cluster interesting you should install on at least a couple of hosts. Here's what an installed cluster looks like using the Tungsten cluster control (cctrl) program.
[tungsten@centos5a tungsten-community-2009-1.2]$ tungsten-manager/bin/cctrl
[LOGICAL] /cluster/comm/> ls

COORDINATOR[centos5a:MANUAL]

ROUTERS:
+-----------------------------------------------------------------------+
|NONE |
+-----------------------------------------------------------------------+

DATASOURCES:
+-----------------------------------------------------------------------+
|centos5a(master:ONLINE, progress=3) |
+-----------------------------------------------------------------------+
| REPLICATOR(role=master, state=ONLINE) |
| DATASERVER(state=ONLINE) |
+-----------------------------------------------------------------------+

+-----------------------------------------------------------------------+
|centos5b(slave:ONLINE, progress=3, latency=0.0) |
+-----------------------------------------------------------------------+
| REPLICATOR(role=slave, master=centos5a, state=ONLINE) |
| DATASERVER(state=ONLINE) |
+-----------------------------------------------------------------------+
Starting from scratch and pulling code from SourceForge, it takes me about 30 minutes to get to an installed cluster with two nodes. At this point you have access to a very powerful set of tools to protect data, keep your databases available, and scale performance. Look at the manuals. Try it out. If you have questions or feedback, post them in the Tungsten forums. In the meantime, have fun with your database cluster.

p.s., We will post binary builds next week. The current build is in final release checks, so you may notice a few problems--I hit a Ruby warning on configuration that will be fixed shortly.

R&D And Innovation In A Downturn: A Misnomer?

I have followed Booz Allen's annual innovation study for the last several years and have written about it here and here. This year's study looks refreshing and asks: how do smart companies that spend good money and effort on innovation approach that spending when the economy is down? The answer: most of the leading innovators are actually accelerating their innovation efforts in the recession – in doing so, they are betting that customers will demand new products and services as the economy comes out of the downturn. Many of the corporate decision makers in the list who control or manage innovation budgets – running into hundreds of billions of dollars of business every year – believe that innovation is critical as they prepare for the upturn, and a majority have maintained or expanded their portfolios and are pursuing new products to improve growth and margins. Practically everyone wants to be as focused and results-oriented as possible. Recent trends, observed over the last few years, suggest that as a rule companies are performing less pure and applied research; instead, they are concentrating their R&D budgets on product development and engineering. The software sector is now brimming with new strength in SaaS and cloud-centric products and services, while behemoths like SAP struggle to maintain growth and leadership! Apple had its best quarters in the last few quarters, when the global economy was perhaps at its lowest ebb!
I found it interesting to see an IBM executive's statement suggesting that a recession or downturn "...really drives a need for innovation and a level of creativity that you might not otherwise have in normal times." I thought the new normal is that normalcy is a product of a bygone era! The reality is that the last two decades have seen more swings between growth and slowdown, and indications are that this may become the new normal – so a downturn or upswing ought to influence yearly innovation spend only marginally, particularly when product development cycles can run into multiple years. If anything, the sensitivity could prove itself in the reverse direction, with companies spending more, or differently, in a downturn. Read Applied Materials' dilemma: its revenues are down significantly this year, yet its customers continue to demand innovative new products to maintain their own competitive positions. The stronger companies want to stay on the same innovation page so that at the end of the cycle they hold a stronger competitive position. Taking a long view of the economy and markets, and relating it to their own context, is done differently by different companies. The top three spenders on R&D in the world happen to be non-American companies! As a matter of fact, in many industries today, predicting the economy for the next few years and its implications may become the most coveted job, and an art, for the executive board. In fact, I would think that not being able to predict and adjust to an ever-changing economic environment may become a disadvantage for any enterprise – constantly testing assumptions about the economic environment may have to become a top item in corporate strategy planning, both for adjusting internally and for succeeding in the marketplace.



Let's look at how this would impact executives inside the organization. Business needs to look at talent horizontally, classified by roles rather than by organizational norms; this is the key to effectively harnessing talent inside enterprises. The differences ought to be expressed in terms of talent valuation – attributes such as knowledge, experience, skills, and personal interaction capabilities – and not in terms of organizational structures (such as business units) or human resources categories (such as age, education, seniority, or compensation). In an age where the rules and procedures of an organization can be an obstacle to segmentation and a force for "averaging" the treatment of individual roles, and where organizations need to offer very specialized services, a radical new look at the way talent gets categorized, nurtured, and reviewed is called for, on an ongoing basis, to provide a fresh and dynamic approach. After all, winning the war for people is a crucial determinant of success for any organization.

In most knowledge businesses, the future value of enterprises centers on building seemingly intangible assets rather than the conventional measures of capital assets. That's where good, capable people, a well-aligned team, a well-conceived strategy, and top-quality leadership matter. With all these in place, the cutting edge would come from innovation in all its forms, from management innovation to disruptive innovation to innovation of the incremental kind. The ability to recruit and mould the leaders who will create the future innovations that make an enterprise more successful is a major responsibility; talent management will in many ways determine the organization's growth as much as the overall business strategies will. This will shape organizations in a significant way, and if organizations get this right along with purposeful innovation models, competitiveness increases and success will follow.

The New Face Of The CIO

The responsibilities of the CIO span a spectrum of managerial tasks, with one end of the spectrum as "supply" - the delivery of IT resources and services to support business functions - and the other end of the spectrum as "demand" -the task of helping the business innovate through its use of technology. Many CIOs admit that balancing both demand and supply is a difficult task. Fortunately, the CIO has a range of new opportunities and tools to help him manage and order these competing priorities. The process starts with an understanding of how new sourcing models can liberate internal resources and funding for strategic business enablement and innovation.

While every CIO plans to align IT and business strategy, the irony is that they don't have enough time for effective strategic planning. Usually they blame it on demand-side pressures. Look at the challenges confronting the CIO:
The business side complains that their CIOs aren't up to speed on the issues confronting the business and can't think through the implications of systems trade-offs, at the business-unit level, for planned implementations or proposed IT investments. At the same time, the business side often struggles to assess the relevance of new technologies for safeguarding its competitiveness. More often than not, business leaders say that their CIOs are not proactively bringing them new ideas about how technology can help them compete more effectively.

Part of the problem stems from the inherent conflict between managing supply and shaping demand. CIOs often must meet requirements to reduce total IT spending, for instance, while making investments to support future scenarios – even though those upgrades will increase IT operating costs. It's indeed a tough job – trying to be both a cost cutter and an innovator – and the CIO sometimes has to compromise one role. Structural issues, whereby parts of the organization are under the control of other executives, also complicate the job. Business-unit leaders want more IT leadership, but they are wary of CIOs who don't tread carefully along business leaders' boundaries. Strategic IT management calls for making improvements on the demand side. Managing the demand side of the equation broadly covers:
- A financial understanding of costs and benefits,
- Business accountability for IT, and
- A clear framework for investments in technologies.
CIOs shift their attention among these three core components as their role evolves: once operations are stabilized and business credibility has been achieved, emphasis shifts toward working more closely with the business, leading to opportunities to contribute to strategic initiatives and direction.

In practice, CIOs who meet and exceed business expectations get rewarded with greater participation in their enterprise's business strategy and higher budgets, and they become favorites with the business side. In most cases, these CIOs tend to have the ear of the CEO through a direct reporting relationship. CIOs need to know not only what the differences are but also how to time the shift; move too soon or too late and credibility with business leaders will suffer.


This month IBM released the findings of its new global study of more than 2,500 chief information officers (CIOs), covering 19 industries across 78 countries. The study confirms the strategic role played by CIOs in making their businesses visionary leaders of innovation and financial growth. Many CIOs are getting much more actively engaged in setting strategy, enabling flexibility and change, and solving business problems, not just IT problems.

The report, replete with insights, is an excellent collection, and I started by looking at the themes and associated metrics that preoccupy CIOs the most. I was startled to find that more and more CIOs appear to be genuinely focused on firing the growth lever of IT and business by turning their attention, in increasing measure, toward innovation. Someone quips that over time the role of the CIO is less and less about technology and more and more about strategy – really hitting the nail on the head. As the role of the CIO transforms, so do the types of projects CIOs lead across their enterprises, allowing them to spend less time and fewer resources running internal infrastructure and more time on transformation, innovation, and business value to help their companies grow revenue.

The report finds that today's CIOs spend an impressive 55 percent of their time on activities that spur innovation: generating buy-in for innovative plans, implementing new technologies, and managing non-technology business issues. The remaining 45 percent is spent on essential, more traditional CIO tasks related to managing the ongoing technology environment, including reducing IT costs, mitigating enterprise risks, and leveraging automation to lower costs elsewhere in the business.

Obviously not every CIO makes the cut. The report says that High-growth CIOs actively integrate business and IT across the organization 94 percent more often than Low-growth CIOs. The study notes that CIOs spend about 20 percent of their time creating and generating buy-in for innovative plans, but High-growth CIOs do certain things more often than Low-growth CIOs: they co-create innovation with the business, proactively suggest better ways to use data, and encourage innovation through awards and recognition. 56 percent of High-growth CIOs use third-party business or IT services, versus 46 percent of Low-growth CIOs. The study also found that High-growth CIOs actively use collaboration and partnering technology within the IT organization 60 percent more often than Low-growth CIOs. Even more impressive, High-growth CIOs used such technology for the entire organization 86 percent more often than Low-growth CIOs.
Successful CIOs, the report notes, actually blend three pairs of roles. At any given time, a CIO is:
• An Insightful Visionary and an Able Pragmatist
• A Savvy Value Creator and a Relentless Cost Cutter
• A Collaborative Business Leader and an Inspiring IT Manager


Adjusting the mix one pair at a time, the study reports, lets CIOs perform the tasks that make innovation real, raise the ROI of IT, and expand their business impact.

Other key findings of the survey:
• CIOs are continuing on the path to dramatically lower energy costs, with 78 percent undergoing or planning virtualization projects.
• 76 percent of CIOs anticipate building a strongly centralized infrastructure in the next five years.
IBM's CIO Pat Toole has commented on the findings as well. In addition to the detailed personal feedback, IBM incorporated financial metrics and detailed statistical analysis into the findings. The report also highlights a number of recommendations – strategic business actions and uses of key technologies that IBM has identified, based on CIO feedback from the study, that CIOs can implement.


(Picture courtesy: IBM)

The Product Maintenance Paradox!

There was a mild tremor in the IT market a few weeks back when Siemens threatened to terminate its annual maintenance contract with SAP owing to the high prices charged by SAP. I noted four years back that around half of Oracle's total revenue of $11.8 billion (in 2005) came from maintenance. As a matter of fact, Oracle sometimes justified its roll-up strategy based on the attractive maintenance revenue streams of acquired products! Oracle's maintenance revenue stream continues to be very high. This opens up the questions: why do customers pay vendors annual maintenance, and what do they get in return to justify this substantial outgo year after year? For end customers, switching software providers is not easy. Most experts and customers agree that replacing big applications from one company with those from another is far too costly (millions at minimum) and time consuming to be worth the bother.
Now Wall Street seems to be waking up to the fragile nature of the seemingly solid maintenance revenue streams of enterprise software vendors.

Courtesy of Vinnie, I saw this CGCowen report written by Peter Goldmacher and Joe del Callar, dated Sep 25, focusing on the license revenue streams of enterprise software vendors. The report highlights:

“As we interpret this data, we can't tell if vendors aren't investing in R&D because customers aren't buying products, or customers aren't buying products because vendors aren't investing in R&D. Regardless of the cause and effect, the trend is clear. Absolute investments in ERP are down dramatically, way beyond the synergies created by scale via M&A. The net result is that customers no longer look at material ERP upgrades as a competitive advantage and therefore vendors are unwilling to increase investments. Customers are watching their ERP vendors generate expanding margins without plowing that profit back into the product and we believe customers are getting resentful”



I certainly believe that the time is now ripe for the Tier I enterprise software market to be disrupted by third-party maintenance providers. With SAP and Oracle said to be realizing gross margins in the neighborhood of 90% on their maintenance business, the economics are simply too strong for third-party maintenance providers not to rise up. Some regulatory interventions, like antitrust suits, may help accelerate the shift; this would embolden service providers to look at the maintenance market from a fresh value perspective.

I must also agree that with the moderate success of some pure-play third-party service providers, the landscape is changing: customer expectations are rising, third-party support is beginning to appear, and several customers are demanding annual lease contracts in place of perpetual licenses – at least in emerging economies. The era of enterprise software as we know it may be over. The straightforward assumption that existing customers will keep paying very high maintenance fees year after year may prove wrong going forward; customers may instead choose to converge onto one homogeneous platform and save a lot more in costs. The real test of a vendor's ability to hang onto customers will come not when maintenance contracts expire but when the major software companies transition to so-called "service oriented architectures," a fundamental change in the way applications are deployed, integrated, and accessed. That shift should accentuate the ability of different vendors to provide ongoing maintenance support and create a vibrant new win-win business therein.

The Coming Video Revolution & The Internet Buildup


Nielsen reported a few weeks back that on average Americans watch 153 hours of TV every month and 3 hours of online video every month. Video ads are beginning to appear in newspapers! Video has a powerful impact: it is easy to share on social platforms, it is much less expensive to produce than in the past, and more and more people have access to it as technology and reach improve.

Cisco's recent forecast of IP traffic for the coming years makes an interesting read. The company expects IP traffic to increase fivefold compared to today, reaching two-thirds of a zettabyte by 2013 – that's two-thirds of a trillion gigabytes – with a compound annual growth rate of 40 percent. In 2013, the Internet will be nearly four times larger than it is in 2009. By year-end 2013, the equivalent of 10 billion DVDs will cross the Internet each month.
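The fivefold and "nearly four times" figures are both consistent with the 40 percent compound rate; a quick back-of-envelope check (my arithmetic, not Cisco's):

\[
1.40^{4} \approx 3.84 \ \text{(2009 to 2013)}, \qquad 1.40^{5} \approx 5.38 \ \text{(2008 to 2013)}
\]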

An important highlight of the Cisco report: a significant part of that traffic will be online video, which will make up 90 percent of all consumer IP traffic by 2013, reaching over 18 exabytes per month. Internet video is now approximately one-third of all consumer Internet traffic, not including the video exchanged through P2P file sharing. The sum of all forms of video (TV, video on demand, Internet, and P2P) will account for over 91 percent of global consumer traffic by 2013, and Internet video alone will account for over 60 percent of all consumer Internet traffic. Video communications will be a formidable block of this traffic – Cisco anticipates an increase of around ten times by 2013, that is, within the next four years. The global mobile explosion shows up in mobile video volume as well: mobile online video watching is expected to rise spectacularly, reaching 64 percent of total mobile IP traffic in 2013 and growing from 33 petabytes per month in 2008 to an estimated 2,184 petabytes (about 2 exabytes) per month in 2013. That increase represents a 131 percent annual growth rate. However, mobile IP traffic will still be only 4 percent of total IP traffic.
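That 131 percent figure squares with the petabyte numbers over the five years from 2008 to 2013 (my arithmetic, not Cisco's):

\[
\left(\frac{2184}{33}\right)^{1/5} \approx 2.31, \quad \text{i.e., about 131 percent growth per year}
\]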

The volume growth also factors in expected growth in peer-to-peer traffic, which is expected to keep increasing even as its share of overall IP traffic declines through 2013. P2P networks currently move 3.3 exabytes per month, and the volume is expected to grow at a compound annual growth rate of 18 percent. However, P2P traffic will decline to only 20 percent of total consumer IP traffic, down from 50 percent in 2008. Cisco expects file hosting services to grow instead at a much bigger rate of 58 percent per year, reaching 3.2 exabytes per month in 2013.
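There is no contradiction in P2P growing in absolute terms while shrinking as a share: it simply grows more slowly than the total. Assuming consumer traffic grows at roughly the overall 40 percent rate, a quick check of my own:

\[
3.3 \times 1.18^{5} \approx 7.6 \ \text{EB/month by 2013}, \qquad 50\% \times \left(\frac{1.18}{1.40}\right)^{5} \approx 21\%
\]

which lines up with the forecast decline to about 20 percent.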

The internet continues to grow, and with more and more news of a global economic turnaround, the growth can only accelerate. The supporting infrastructure needs to grow faster to keep pace, and this seems to be happening. Since 2007, the annual growth rate of international Internet capacity has exceeded 60%. In 2009, international internet bandwidth increased 64%, and network operators added 9.4 Tbps of new capacity – exceeding the 8.7 Tbps in existence just two years earlier. No wonder the internet buildout phenomenon is such an amazing one.


(Picture courtesy: Cisco)

Consumerization Inside Enterprises Reinforces The Idea : IT is Business

When more and more focus is put on innovation – its evolution, its growth, and how it is managed – and on what conventional collaboration, fused with powerful mechanisms like internet-enabled collaboration, could help achieve, it all points to a world of immense possibilities. With a dominant number of internet users poised to take a dip in the virtual world, the virtual world could become more and more real! Apple and the high-tech semiconductor industry can vouch for the pull from the consumer segment – for both of them, the consumer segment happens to be the largest consuming class!

The interesting part is that the consumerization of IT is creating a whole new world, governed by a new set of rules. The impact of consumerization on the enterprise, and the opportunities to leverage such advances, are all groomed in the consumer space itself. The transition into enterprise IT then happens almost automatically – in a way, advances in the consumer space dictate the corresponding fallout in the enterprise space. Many digital collaboration mechanisms are available at throwaway prices today. This creates so much pressure inside enterprises that IT departments are forced to give corporate users access to the scale and innovation of the consumer market. True, but difficult to believe – right? An analysis of the past shows that in a significant number of cases, technologies originally focused on the consumer space have made a deep impact over time on the enterprise space – personal computers, search, and IM are all shining examples of this powerful trend. Native web companies keep coming out with full-blown trial offerings that entice many more consumers, and many times revenue and utilization value evolve out of growing usage of such offerings. In the process, the consumer space gets richer and richer, forcing successful offerings to be pushed into the enterprise – in larger numbers and at a faster pace.

Consumerization opens up the organization to consumer-grade services that innovate at a much faster pace than the organization can, providing in the process an unmatched potential for handsome returns to the business. Productivity could rise as workers become less tied to the office. Consumerization is also forcing massive changes in resource consumption: it offers a path to reducing a company's carbon footprint by encouraging telecommuting and Internet-based applications run by mega-scale server farms, which are in many cases powered by greener energy sources and are more energy efficient than hardware in corporate data centers. Large corporates are beginning to adopt such technologies aggressively. With an impending explosive growth of communication and broadband capabilities, the medium of the virtual world is sure to take a central seat. Clearly the virtual reality movement does not appear to be a fad per se; it can help business create and define new frontiers in its growth path. Implementing IT consumerization is not a major technical challenge, but it does need effective organizational change management discipline. What should the CXOs do in such contexts? Beat the status quo. Break any resistance from outsourcing partners who stand in the way of faster adoption of consumer technologies inside the enterprise – by definition, consumerization is at odds with the notion of paying high or fixed costs and is therefore perceived to be against outsourcers' interests. Embrace such technologies faster and align them in innovative ways to business growth plans. Consumer technologies are not a taboo to be shunned; they need to be constantly assessed for their potential to provide innovative leverage in growing the business. This could end up forcing a larger role for IT in business, further reinforcing the idea that IT is Business.

Dell To Buy Perot Systems : Too Little Too Late!

Dell plans to acquire Perot Systems. The momentum picks up! Dell expects this deal to position it as a more formidable player, parading both hardware and services expertise, given the fast-changing nature of the business and the accelerating adoption of cloud computing. Dell was always seen as a laggard when it comes to services, and given that its principal competitors – HP and IBM – are now very big in services, Dell had to bite the bullet and acquire a big service player at some point. Perot's strengths are mostly in banking, financial services, healthcare, and government, and would to a limited extent help Dell directly with its footprint. The capabilities of Perot Systems may be more useful to Dell than to its current customer base. Perot Systems customers will have to factor in the new reality of ownership changing to Dell, though the existing CEO will continue to run the business. Two things struck me:

A. Dell should have acquired a services company at least two to three years back, when it confronted serious growth troubles – at around the same time, HP muscled in to acquire EDS. I predicted that HP might buy EDS years before it happened.

B. Perot has limited scale compared to the other global service players and the India-headquartered service players. Perot's offshore capability is also generally seen as quite limited compared to other traditional global players. The valuation looks interesting here: USD 2.8 billion of revenue gets a valuation of USD 3.9 billion, after providing a substantial premium to the last quoted trade.
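For context, that price works out to roughly \( 3.9 / 2.8 \approx 1.4\times \) revenue (my arithmetic, not a figure from the deal announcement).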

I am not too sure this move by Dell will perturb IBM or HP, given the lack of scale of Perot's operations, though it may give some limited upside to Dell. The corporate integration may be accomplished easily given that both companies are headquartered in Texas. I was actually expecting Dell to make a serious move into the smartphone market – say by acquiring Palm. It may happen in the future; we will have to wait and see. It would be interesting to see how Oracle (which recently acquired Sun) looks at this development. I am very keen to watch what Cisco does now – it has entered the more competitive server business (big competition to Dell, IBM, and HP) and has more ambitions in unified computing. Cisco's cash position and appetite for acquisition are well known, and in the recent past there have been rumors of Cisco looking at acquiring Accenture. While I am not clear how this acquisition may decisively benefit Dell, I do believe that Dell's move may trigger new momentum in Cisco's next acquisition move as well!



Update: See Phil's and Vinnie's perspectives as well.

The Changing Enterprise Software Momentum

Changing economic sentiment and a diminished IPO market create the perfect storm for the Big 4 MISO (Microsoft, IBM, SAP, and Oracle) as they see medium and small-sized vendors beginning to make M&A moves. See this:

-> Adobe acquires Omniture

-> CA buys NetQoS

-> VMware buying SpringSource

-> PE players scooping Skype

-> Intuit buying startup Mint

-> Avaya buys parts of Nortel


Clearly momentum is slowly beginning to hit the M&A circuit – the backdrop being that this year has seen very few deals compared to the averages of the last few years. Several trends at work shape the thinking on why the immediate future could see more continued consolidation. Sramana Mitra assesses the prospects of some players. One may ask: why think of enterprise software when consumer technologies and internet infrastructure/application players are getting more attention? As I wrote here, the consumerization of enterprise technology has the potential to open up powerful new combinations. Such a fusion of different worlds may open up good chances for disruptive innovation – it provides a fertile ground that can lead to potential business model innovation – so enterprises need to be well prepared to capitalize on such possibilities. What should the CXOs do in such contexts? Embrace such technologies faster and align them in innovative ways to business growth plans. Consumer technologies are not a taboo to be shunned; they need to be constantly assessed for their potential to provide innovative leverage in growing the business.

Strategic acquisitions target vendors with new product presence and strong recurring revenue streams in well-established areas. The cognoscenti keep whispering about large maintenance revenues as a marker of potential targets. Look at Oracle's reported numbers for this quarter: GAAP new software license revenues were down 17%, while software license updates and product support were up 6%. Nurturing a profitable, recurring revenue stream will allow many vendors to cover overall development and support costs as they weather the next storm. The hunt is on for vendors who fit this bill as megavendors and private equity actively chase these assets.

Weaker companies are seeing much lower publicly traded valuations. For companies on the prowl for acquisitions within a target class, it's never been cheaper or easier to acquire a competitor. Most valuation multiples have become quite attractive and fall below the standard 2x to 3x revenue price target.

International market expansion: the larger vendors express tremendous interest in acquiring new distribution channels, micro industry verticals, and new geographical coverage. Many in the MISO ecosystem see a lot of turbulence, and rumors of M&A run fierce as partners consolidate to gain scale for regional and global delivery.

Most privately held vendors' exit strategies revolve around acquisition, not IPO. Many firms with IPO plans have been told by their boards to refocus on revenue growth and partnerships. The intention: use partnership success to both drive revenue growth and attract acquisition by a larger vendor. Many see acquisition by the Big 4 as the best exit strategy at this point in time.

Newer delivery models are gaining acceptance: if we analyze standalone new sales numbers, we may see this trend clearly. MISO on-premise license revenues may keep dropping or stagnating, and in some cases record very moderate growth (in specialized areas). ERP refresh rates may slowly begin to show a downward trend! SaaS and cloud players need to – and I believe will – expand their presence beyond the FAS, CRM, and HCM spaces; this may also include on-premise players offering SaaS solutions in niche areas like procurement, PLM, etc. Cloud computing models will over time become more popular, with their ease of use, quick implementation times, pay-as-you-go pricing, and no-infrastructure model à la Google or Salesforce, and they are in fact seeing faster adoption.

Net-net: despite expectations of a slow-moving economy, end users should assume that the biggest vendors and faster-moving niche vendors will continue their torrid pace of acquisitions. As these acquisitions factor into long-term apps strategies and planning for 2010 purchases, users must assume that truly specialized solutions with significant industry footprint will be acquired. Many customers even recommend the niche vendors they work with to the megavendors as acquisition targets, so that their investments are deemed safe.

Tungsten Replicator 1.0.3 Release

Tungsten Replicator version 1.0.3 is now released and available as a download from SourceForge. Tungsten Replicator provides advanced, platform-independent replication for MySQL 5.0/5.1 with global transaction IDs, crash-safe slaves, flexible filtering, and built-in consistency checking. The 1.0.3 release adds backup and restore, which I described in a previous blog article.

In addition, there are numerous small feature additions and some great bug fixes that raise performance and stability for large-scale deployments. For example, the replicator now goes online in seconds even when there are millions of rows in the history table. This fixes our previous go-online performance, which was, er, pretty slow. Thanks to our users in the Continuent forums for helping us track down this problem as well as several others.

As of the 1.0.3 release we are also starting to offer the enterprise documentation for the open source replicator. I think this provides better documentation all around, not least of all because we can do a better job of maintaining a single copy. Get current replicator documentation here.