Proving Master/Slave Clusters Work and Learning along the Way

2009 has been a big year for Tungsten. In January we had (barely) working replication for MySQL. It had some neat features like global IDs and event filters, but to be frank you needed imagination to see the real value. Since then, Tungsten has grown into a full-blown database clustering solution capable of handling a wide range of user problems. Here are just a few of the features we completed over the course of the year:
  • Autonomic cluster management using business rules to implement auto-discovery of new databases, failover, and quick recovery from failures
  • Built-in broadcast monitoring of databases and replicators
  • Integrated backup and restore operations
  • Pluggable replication management, proven by clustering implementations based on PostgreSQL Warm Standby and Londiste
  • Multiple routing mechanisms to provide seamless failover and load balancing of SQL
  • Last, but not least, simple command line installation to configure and start Tungsten in minutes
You can see the results in our latest release, Tungsten 1.2.1, which comes in both open source and commercial flavors. (See our downloads page to get software as well as documentation.)

In the latter part of 2009 we also worked through our first round of customer deployments, which was an adventure but helped Tungsten grow enormously. Along the way, we confirmed a number of hunches and learned some completely new lessons.
  • Hardware is changing the database game. In particular, performance improvements are shifting clustering in the direction of loosely coupled master/slave replication rather than tightly coupled multi-master approaches. As I laid out in a previous article, the problem space is shifting from database performance to availability, data protection, and utilization.
  • Software as a Service (SaaS) is an important driver for replication technology. Not only is the SaaS sector growing, but even small SaaS applications can result in complex database topologies that need parallel, bi-directional, and cross-site replication, among other features. SaaS business economics tend to drive building these systems on open source databases like MySQL and PostgreSQL. By supporting SaaS, you support many other applications as well.
  • Cluster management is hard but worthwhile. Building distributed management with no single points of failure is a challenging problem and probably the place where Tungsten still has the most work to do. Once you get it working, though, it's like magic. We have focused on making management procedures not just simpler but, wherever possible, unnecessary by making the cluster self-managing.
  • Business rules rock. We picked the Drools rule engine to help control Tungsten and make it automatically reconfigure itself when data sources appear or fail. The result has been an incredibly flexible system that is easy to diagnose and extend. Just one example: floating IP address support for master databases took two hours to implement using a couple of new rules that work alongside the existing rule set (a minimal sketch of the rule pattern appears after this list). If you are not familiar with rules technology, there is still time to make a New Year's resolution to learn it in 2010. It's powerful stuff.
  • Clustering has to be transparent. I mean really transparent. We were in denial on this subject before we started to work closely with ISPs, where you don't have the luxury of asking people to change code. As a result, Tungsten Replicator is now close to a drop-in replacement for MySQL replication. We also implemented proxying based on packet inspection rather than translation and re-execution, which raises throughput and reduces incompatibilities visible to applications.
  • Ship integrated, easy-to-use solutions. We made the mistake of releasing Tungsten into open source as a set of components that users had to integrate themselves. We have since recanted. As penance we now ship fully integrated clusters with simple installation procedures even for open source editions and are steadily extending the installations to cover not just our own software but also database and network configuration.
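To make the rules idea above concrete, here is a minimal, hypothetical Java sketch of the condition/action ("when/then") pattern that a rule engine such as Drools encourages. It is not Tungsten's actual rule set and does not use the Drools API; the ClusterState, Rule, and promoteSlave names are invented for illustration.

```java
// Hypothetical sketch of the condition/action style a rule engine encourages.
// Not Tungsten code and not the Drools API; all names are illustrative.
import java.util.List;

interface Rule {
    boolean matches(ClusterState state);   // the "when" part
    void fire(ClusterState state);         // the "then" part
}

class ClusterState {
    boolean masterFailed;
    String caughtUpSlave;                  // a slave with no replication lag, or null
    String floatingIp;                     // virtual IP that should follow the master

    void promoteSlave(String slave) { System.out.println("Promoting " + slave); }
    void moveFloatingIp(String host) { System.out.println("Moving " + floatingIp + " to " + host); }
}

// Example rule: when the master has failed and a caught-up slave exists, promote it.
class FailoverRule implements Rule {
    public boolean matches(ClusterState s) { return s.masterFailed && s.caughtUpSlave != null; }
    public void fire(ClusterState s)       { s.promoteSlave(s.caughtUpSlave); }
}

// A new capability (floating IP follows the master) is just one more rule
// dropped alongside the existing ones; nothing else has to change.
class FloatingIpRule implements Rule {
    public boolean matches(ClusterState s) { return s.masterFailed && s.caughtUpSlave != null; }
    public void fire(ClusterState s)       { s.moveFloatingIp(s.caughtUpSlave); }
}

public class RuleDemo {
    public static void main(String[] args) {
        ClusterState state = new ClusterState();
        state.masterFailed = true;
        state.caughtUpSlave = "db2";
        state.floatingIp = "10.0.0.100";

        List<Rule> rules = List.of(new FailoverRule(), new FloatingIpRule());
        for (Rule r : rules) {
            if (r.matches(state)) r.fire(state);   // a real engine re-evaluates continuously
        }
    }
}
```

The point is the design shape: each capability is a small, independent when/then pair, so extending cluster behavior means adding rules rather than rewriting a monolithic state machine.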
Beyond the features and learning experiences, the real accomplishment of 2009 was to prove that integrated master/slave clusters can solve a wide range of problems, from data protection to HA to performance scaling. In fact, what we have implemented works a lot better than I expected when we began to design the system back in 2007. (In case this sounds like a lack of faith, plausible ideas do not always work in the clustering field.) If you have not tried Tungsten, download it and see if you share my opinion.

Finally, keep watching Tungsten in 2010. We are a long way from running out of ideas for making Tungsten both more capable and easier to use. It's going to be a great year.

Smart Technology For The Smart Enterprise

The web is abuzz with analyses of Google's real-time search effectiveness, assessed in the context of the death of a Hollywood actress today. The question there is: how real time is real time? Google has demonstrated, through algorithmic means (without human intervention), the capability to report results for a search query less than three minutes after the event was reported (read Matt Cutts's comments therein!). What amazing progress from Google!

Recently, Morgan Stanley pointed out that the mobile Internet is ramping faster than the desktop Internet did, and believes more users may connect to the Internet via mobile devices than desktop PCs within five years – this is pushing things hard for service providers. Massive mobile data growth is driving transitions for carriers and equipment providers, and it is slated to increase over time as emerging markets embrace mobile technologies much faster, raising the material potential for mobile Internet user growth. Low penetration of fixed-line telephony and already vibrant mobile value-added services mean that for many EM users and SMEs, the Internet will be mobile.
As the MIT journal notes, both Google and Microsoft are racing to add more real-time information to their search results, and a slew of startups are developing technology to collect and deliver the freshest information from around the Web. But there's more to the real-time Web than just microblogging posts, social network updates, and up-to-the-minute news stories. Huge volumes of data are generated, behind the scenes, every time a person watches a video, clicks on an ad, or performs just about any other action online. And if this user-generated data can be processed rapidly, it could provide new ways to tailor the content on a website, in close to real time.
Many different approaches are possible to realize this opportunity – many web companies already use analytics to optimize their content throughout the course of a day. Aggregation sites, for example, tweak the layout of their home page by monitoring the popularity of different articles. But traditionally, information has been collected, stored, and then analyzed afterward. The MIT journal article correctly highlights that using seconds-old data to tailor content automatically is the next step. In particular, a lot of the information generated in real time relates to advertising. Real-time applications, whether using traditional database technology or Hadoop, stand to become much more sophisticated going forward: "When people say real-time Web today, they have a narrow view of it – consumer applications like Twitter, Facebook, and a little bit of search." Startups are also beginning to look at different technologies to capture and leverage current data and the patterns therein.
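As a purely illustrative sketch of the "seconds-old data" idea, here is a small Java class that keeps a short sliding window of click events per article and re-ranks content from it. The names (ClickTracker, topArticles) are hypothetical and not taken from any of the products mentioned above.

```java
// Illustrative sketch: rank articles by clicks received in the last few minutes.
// All names are hypothetical; this is not code from any vendor mentioned above.
import java.util.*;
import java.util.stream.Collectors;

public class ClickTracker {
    private static final long WINDOW_MS = 5 * 60 * 1000;   // 5-minute sliding window

    // articleId -> timestamps of recent clicks
    private final Map<String, Deque<Long>> clicks = new HashMap<>();

    public synchronized void recordClick(String articleId) {
        clicks.computeIfAbsent(articleId, k -> new ArrayDeque<>())
              .addLast(System.currentTimeMillis());
    }

    // Return the n most-clicked articles within the window, using only fresh data.
    public synchronized List<String> topArticles(int n) {
        long cutoff = System.currentTimeMillis() - WINDOW_MS;
        Map<String, Integer> counts = new HashMap<>();
        for (Map.Entry<String, Deque<Long>> e : clicks.entrySet()) {
            Deque<Long> ts = e.getValue();
            while (!ts.isEmpty() && ts.peekFirst() < cutoff) ts.pollFirst();  // expire old clicks
            if (!ts.isEmpty()) counts.put(e.getKey(), ts.size());
        }
        return counts.entrySet().stream()
                .sorted((a, b) -> b.getValue() - a.getValue())
                .limit(n)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}
```

A home page could call topArticles every few seconds to reorder its layout – exactly the "tweak the layout by monitoring popularity" behavior described above, fed with near-real-time data rather than yesterday's logs.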
Last month, in a very nice article, The Economist highlighted that, thanks to Moore's law (a doubling of capacity every 18 months or so), chips, sensors and radio devices have become so small and cheap that they can be embedded virtually anywhere.

Today, two-thirds of new products already come with some electronics built in. By 2017 there could be 7 trillion wirelessly connected devices and objects—about 1,000 per person. Sensors and chips will produce huge amounts of data. And IT systems are becoming powerful enough to analyse them in real time and predict how things will evolve.
Building on the smart grids space – where colossal waste in transmission, distribution and consumption could be sharply reduced – the Economist notes that it is in big cities that "smartification" will have the most impact. A plethora of systems can be made more intelligent and then combined into a "system of systems": not just transport and the power grid, but public safety, water supply and even health care (think remote monitoring of patients). With the help of Cisco, another big IT firm, the South Korean city of Incheon aims to become a "Smart+Connected" community, with virtual government services, green energy services and intelligent buildings.

If one were to look at a different place – inside the enterprise – collaboration is enabling people to come together more easily and readily than before, but the upside potential there is very high, with possibilities like workflow baked natively into documents enabling real-time communication among the stakeholders.

What technology would enable all this to happen, covering data, transactions, content, collaboration, streaming and more inside smart enterprises? It is Complex Event Processing, or CEP: a technology in transition (the hallmark of any maturing technology) and, more importantly, a precursor that prepares enterprises for more sophisticated autonomous real-time decision making.


At its core, CEP collects data from various sources, primes raw events, applies smart algorithms (which learn over time) and invokes the right set of rules to decipher patterns in real time – modeling complex scenarios, distilling the trends therein and churning out results that enable real-time decision making. Smart enterprises can ill afford not having such technologies deployed internally. IBM calls this the smart planet revolution, and all tech majors, from software players to infrastructure players, want to have a decisive play here. Every utility service and all competitive sectors like transport and retail are looking at adopting this technology aggressively in areas like supply chain management and finance optimization.
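As a toy illustration of the collect/correlate/act loop just described, here is a hedged Java sketch of a single CEP-style pattern: raise an alert when more than a threshold number of failed-payment events from the same customer arrive within a short window. Real CEP engines express such patterns declaratively and run many of them concurrently; the class and method names here are invented for illustration.

```java
// Toy CEP-style pattern: "more than 3 failed payments from one customer
// within 60 seconds" raises an alert. Names are illustrative only.
import java.util.*;

public class FraudPattern {
    private static final int  THRESHOLD = 3;
    private static final long WINDOW_MS = 60_000;

    // customerId -> timestamps of recent failed-payment events
    private final Map<String, Deque<Long>> failures = new HashMap<>();

    /** Feed one event into the pattern; returns true if the pattern fires. */
    public boolean onFailedPayment(String customerId, long timestampMs) {
        Deque<Long> window = failures.computeIfAbsent(customerId, k -> new ArrayDeque<>());
        window.addLast(timestampMs);
        while (!window.isEmpty() && window.peekFirst() < timestampMs - WINDOW_MS) {
            window.pollFirst();                       // slide the window forward
        }
        return window.size() > THRESHOLD;             // correlate, then decide
    }

    public static void main(String[] args) {
        FraudPattern p = new FraudPattern();
        long t = System.currentTimeMillis();
        for (int i = 0; i < 5; i++) {
            boolean alert = p.onFailedPayment("cust-42", t + i * 1_000);
            if (alert) System.out.println("ALERT: possible fraud for cust-42");
        }
    }
}
```

A production deployment would run thousands of such patterns over streams from many sources, but the essential shape is the same: windowing, correlation, and a rule that turns a detected pattern into an action.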
But smart infrastructure alone won't be sufficient for organizations to leverage this – it calls for smart thinking at the process level and in the decision-making rooms. Today information overload is almost choking enterprises and, to an extent, individuals.

Existing architectures centered around RDBMSs and modern web services/SOA are not equipped to handle this nature of data, its range, volume and complexity. While processing data from very diverse sources, CEP engines can model data and apply native AI capabilities to event streams. With the right executive interventions and decision making, smart enterprises of the future may have the wherewithal to divide and conquer where needed and synthesise various streams of information in real time in the right way. Such a process would unlock the proverbial hold of central decision making within enterprises and allow decentralized, autonomous, real-time decision making to maintain a competitive edge in the marketplace.
This would lead to more and more adoption, and the resultant complexity would force the technology to improve more rapidly. The ideal scenario is that the complex mesh that an enterprise is gets more and more opportunity to respond with real-time data at all its nodes, making a smart enterprise armed with such solutions a veritable force in the marketplace. There is no longer the luxury of building cubes and waiting for analyses when real-time forces can torpedo any strategic plan with a rapid killer response from market forces – that's where CEP fits in very well, providing a real-time, digitally unified view of various scenarios and their overall effects, enabling much more rapid response. Remember the saying: "the CEO can't even blink for a moment."
When you look at the key Google Labs projects (courtesy of CIO magazine), you can't ignore the Google Goggles project or, for that matter, Google's voice-to-message-to-voice translation network – these are all precursors of the complex things processed in the CEP world. I took Google as the example because this company wants to solve really big and complex problems as part of its charter!
Real business problems seeking new technology solutions, and emerging technology solutions maturing to serve increasingly complex business problems, are the perfect way to catalyze growth in new/emerging areas, and here we see the two playing together well. Forward-looking plans, simulation and optimization are all direct derivatives of good CEP solutions, and with such solutions deployed well we can see the emergence of a new digital nervous system inside enterprises, which in some cases can even trigger the creation of new business models.

Cloud Computing & IT Spending!

In continuation of the previous note:

Involved discussions on the cloud invariably turn towards the question: would embracing the cloud help business bring down IT spend? This question assumes more relevance given that early adopters are increasing fast and IT spend, by various counts, looks to be flat to marginally higher in the coming quarters.

Today cloud vendors seem mostly to follow the path of playing the volume game to make their money – very different from conventional approaches. In most cases, early adopters in the past used to spend at higher cost to gain first-mover advantage in business and thereby recoup the costs while the technology matured and began to be offered at lower prices. But today, part of the confusion is sown by some vendors when they boldly proclaim huge cuts in IT spend with the adoption of the cloud. I think this may be true in a few cases, but it is hardly the way to push cloud computing. Why not, one may ask.

As recorded by Joe McKendrick, at a recent panel at Interop, AT&T's Joe Weinman raised doubts about the sustainability of cloud computing economics, describing a scenario in which they break down as enterprise management requirements come into play. "I'm not sure there are any unit-cost advantages that are sustainable among large enterprises," he said. He expects adoption of external cloud computing in some areas, and private capabilities for others.

The belief is that in the immediate future, as cloud service providers invest in establishing infrastructure and businesses set up private clouds, there may be a surge of spending. But with more sophisticated infrastructure management technologies, the cost of automation may come down, just as the net benefits of virtualization begin to substantially outweigh the costs incurred.

Let's look at the acceleration of processing power inside the data centers: this game never ceases to progress, so at any point in time we will see more and more investment to embrace the latest technology. The corollary is that business begins to see better and better use of that progress, but as we have seen over the last 25 years amidst this cycle, spend in absolute terms has not come down at all. All of us know that very high percentages of virtualization are a pipedream even in the best of circumstances. A new hardware refresh cycle looks to be a solid case as cloud computing gains more and more traction. Data centers are in a constant race to improve performance, flexibility and delivery, in turn opening more opportunities for new software and services. All of this will begin to reflect in additional IT spending.

All of us recognize the explosion of information and data in our day-to-day lives, and business experiences a similar explosion of data. From handling a few terabytes of data years back, today we are seeing the top five Internet giants turning out several hundred terabytes of data annually. The other day a friend of mine told me how he got reports of a smartphone launch push communication almost clogging Internet bandwidth earlier this quarter. This means that while unit-cost savings may look good on paper, information complexity will drive businesses to leverage more and more of technologies like the cloud – imagine every business beginning to expect and invest in cloud-based analytics; the infrastructure needed to support such initiatives could become significant. Leading VCs like Evangelos are rolling out the red carpet for investments in companies that can cater to this pent-up need. Now extend the argument further: more and more data, followed by more and more complex real-time analytics. Some skeptics may say this is a never-ending game of pushing the frontier; they may well be right in a way, but this could become the way of doing business in this century.

Like BI being a big-ticket focus item inside enterprises today, we shall potentially see analytics in the cloud becoming a very large component of the cloud activity/spend landscape. David Linthicum notes a trend in which much of the innovation around leveraging larger amounts of structured and unstructured data for business intelligence, or general business operations, comes from innovation in the cloud, not from traditional on-premise software moving up to the cloud. This is a shift. This trend, combined with the fact that cloud providers offer scalability on demand, could be the one-two punch that sends much of our business data to cloud computing platforms.

Let's look at this from a different plane. Business will begin to leverage advancements in the cloud to seek a more differentiated edge for itself – this means a small, efficient business can really compete well against a large, inefficient business and win in real circumstances. The winner will spend more to maintain the edge, and the lagging business will try to imitate or surpass the winning model, which means more and more cloud/IT involvement is a foregone conclusion. In an expanding business scenario, the drive to maximize benefits is unlikely to give way to pull-backs in IT enablement. The cost, efficiency and flexibility leverage for improving productivity would be self-serving for business, but it would also create more incentives to keep stretching this approach, in turn fuelling more leverage of IT enablement. In a dream situation, IT enablement must release locked-in value from investments, free up more funds for future strategic initiatives, and directly help in creating new business models and facilitating new business channels.
The bottom line: expect more and more business leverage of IT and, ironically, with clouds this may not come through less IT – more of the benefit may flow from engaging IT a lot more. If business does begin to demand more bang for the buck, the scenario looks plausible: a few mega vendors may dominate the space and be at each other's throats promising low-cost, high-value computing benefits to the CIO. But I still believe the benefit frontiers will keep shifting, forcing business to commit to more and more IT leverage!

Cloud Adoption: Early Signs

I was in a closed-door meeting with some customers and influencers, analyzing the state of the cloud computing market and future directions, including key players' maturity, monetization and derivative impacts throughout the cloud landscape. The clear consensus was that we are set to enter a stage where increasing levels of outsourcing will be pointed towards cloud service providers.

Let's begin from the beginning: the definition of what cloud computing is still seems to be evolving – Infrastructure, Platform, Service – all occasionally seem to be used interchangeably, and the overlap in what they can offer becomes more pronounced with more discussion of their capabilities and demonstrated early success stories. The consensus is that phenomenal growth is expected in the on-demand infrastructure and platform markets.

Vendors are getting more and more confident about growth and adoption. The market leaders are beginning to report rapid adoption from customers across industries and regions, spanning varied sizes and high transaction volumes. One of the most important drivers of this adoption seems to stem from the desire of business to look at SaaS and beyond into the cloud, based mostly on considerations of cost takeouts and, in a few cases, the desire to be early adopters of the cloud. Considerations centering on change management, integration, migration of data, security rework inside the enterprise, IT staff training, and the business push to move faster and scale ad infinitum with the cloud all become key areas to manage as the cloud goes mainstream, and are definitive factors in the adoption curve. Two things are common in this adoption.

Short-term, overflow or burst capacity to supplement on-premise assets – it appears that this may well be the primary use case for as-a-service cloud platforms currently in the market. Some consider this a stop-gap hosting arrangement, while for others it is an opportunity for cloud-based services and applications to add new capabilities on top of the existing on-premise stack.


Adoption is not as smooth as flipping a switch. Sophisticated users are beginning to weigh private (hybrid) clouds, and discussions about multi-tenancy and single instancing are getting more and more attention, in some cases already sowing the seeds of religious debates. Many users are beginning to assess the relevance of the cloud with respect to the ability to integrate with communities, the ability to share existing environments with other players, the ability to integrate third-party apps, and ways of using and paying for real-time capacity, storage and compute usage. In terms of top-of-mind recall and the experience pool, it appears that Google, Amazon and SFDC are the frontrunners. The Azure platform is beginning to rear its head in the discussions, albeit occasionally, while everyone admitted that Microsoft will one day become a strong force to contend with. Some shared a statistic: Google may have more than 10 million paid users, more than half a million developers have courted the Amazon Web Services platform, and SFDC has more than a million subscribers trying to leverage the Force.com platform. The surprising part is that, with more and more adoption – like the proverbial law that work expands to fill the time available – more and more cloud-centric offerings will keep coming to eat the budget. So rest assured no budget shrinkage is going to happen in the short to medium term, even as capabilities increase.

How Much Information & Overload Phenomenon!

The Global Information Industry Center at UCSD has put together statistics about how much information gets consumed by Americans every day. Two previous studies, by Peter Lyman and Hal Varian in 2000 and 2003, analyzed the quantity of original content created, rather than what was consumed. A more recent study measured consumption, but estimated that only 0.3 zettabytes were consumed worldwide in 2007.
In 2008, Americans consumed information for about 1.3 trillion hours, an average of almost 12 hours per day. Consumption totaled 3.6 zettabytes and 10,845 trillion words, corresponding to 100,500 words and 34 gigabytes for an average person on an average day. A zettabyte is 10 to the 21st power bytes, a million million gigabytes. Hours of information consumption grew at 2.6 percent per year from 1980 to 2008, due to a combination of population growth and increasing hours per capita, from 7.4 to 11.8. More surprising is that information consumption in bytes increased at only 5.4 percent per year. Yet the capacity to process data has been driven by Moore's Law, rising at least 30 percent per year. One reason for the slow growth in bytes is that color TV changed little over that period. High-definition TV is increasing the number of bytes in TV programs, but slowly. Additional stats to note:
The traditional media of radio and TV still dominate our consumption per day, with a total of 60 percent of the hours. In total, more than three-quarters of U.S. households' information time is spent with non-computer sources. Despite this, computers have had major effects on some aspects of information consumption. In the past, information consumption was overwhelmingly passive, with telephone being the only interactive medium. Thanks to computers, a full third of words and more than half of bytes are now received interactively. Reading, which was in decline due to the growth of television, tripled from 1980 to 2008, because it is the overwhelmingly preferred way to receive words on the Internet.
a. The average person spends 2 hours a day on the computer
b. 100,000 words are read each day.
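As a rough back-of-envelope check on the per-person figures quoted above (the roughly 300 million U.S. population figure is my assumption, not a number from the study):

```latex
% Hours: 1.3 trillion hours per year spread over ~300 million people
\frac{1.3\times 10^{12}\ \text{hours/yr}}{3\times 10^{8}\ \text{people}\times 365\ \text{days}} \approx 11.9\ \text{hours per person per day}

% Bytes: 3.6 zettabytes = 3.6 \times 10^{21} bytes per year
\frac{3.6\times 10^{21}\ \text{bytes/yr}}{3\times 10^{8}\ \text{people}\times 365\ \text{days}} \approx 3.3\times 10^{10}\ \text{bytes} \approx 33\ \text{GB per person per day}
```

Both results line up, within rounding, with the reported figures of almost 12 hours and 34 gigabytes per person per day.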
Clearly, in this world, there is just too much information. And that is on top of the normal channels: email, friends to stay in touch with, meetings to attend, research papers one needs to stay on top of, books to read, TiVo recordings and so forth. With social media and RSS feeds, one could literally have hundreds of posts each day from highly selective sources coming through feed aggregators. To make matters worse, people are now finding ways to take non-textual information sources such as audio files (e.g. podcasts) and even video files and tie them to RSS. A large fraction of the people one interacts with are walking around with their eyes glazed, seemingly on auto-pilot, speaking a mile a minute about this blog they read, this documentary they TiVo'd, this video they saw on the Net, this new startup company that's hot.
The Rise of Interaction: Most sources of information in the past were consumed passively. Listening to music on the radio, for example, does not require any interaction beyond selecting a channel, nor any attention thereafter. Telephone calls were the only interactive form of information, and they are only 5 percent of words and a negligible fraction of bytes. However, the arrival of home computers has dramatically changed this, as computer games are highly interactive. Most home computer programs (such as writing or working with user-generated content) are as well. Arguably, web use is also highly interactive, with multiple decisions each minute about what to click on next. A third of information today is received interactively. This is an overwhelming transformation, and it is not surprising if it causes some cognitive changes. These changes may not all be good, but they will be widespread.

Private Clouds: Real & Relevant?

An interesting thread is now on within the Enterprise Irregulars group on what constitutes private clouds – again, a very enlightened discussion. The issue I want to talk about is: if private clouds do indeed exist, what is their adoption path? Let's start from the beginning: can we use the term "cloud" to describe the changes that happen inside IT architectures within the enterprise? Though there can be no definitive answer, a series of transitions to a new order of things will, in my opinion, become imminent.

The pressures on IT and the engulfing sense of change in the IT landscape are hard to overlook. These pressures mean more organizations will seriously look at SaaS, renegotiate license terms, rapidly adopt virtualization, and so on. As part of this and beyond, internal IT will be forced more and more to show more bang for the buck, and it is my view that organizations will increasingly question committed costs and begin to attack them more systematically – earlier, only sporadic efforts marked their endeavors. This could also unlock additional resources that could potentially go towards funding new initiatives. Plenty of enterprises are going this route, and their service partners are in some cases prodding them to go this way.

The change may in many senses make IT inside enterprises look, behave and perform like a cloud computing provider, though there would be limitations (in most places serious ones) on scale, usage assessment, security and the like. There are strong incentives propelling enterprises to channel their efforts and investments over the next few years into mimicking a private cloud service architecture that they manage internally. This could well become their way of staging towards finally embracing the (public) cloud over a period of time. These baby-steps-to-nearly-full-blown efforts are needed to prepare organizations to embrace clouds; it may not be feasible at all to make the shift from on-premise to cloud like flipping a switch. Serious licensing issues, maturity, lack of readiness, integration concerns and security all get in the way of enterprises looking at the public cloud in a holistic way. These steps should not be looked down upon – they may very well become the foundation for moving into public clouds in a big way.

Let's think this through: setting up a private cloud is a motherhood statement at best (in many organizational surveys, setting up private clouds is not in the CIO's top three priorities – if anything, virtualization finds a place). Making it happen in a credible way means re-examining most parts of IT functioning and the business-IT relationship inside the enterprise. Most elements of the bedrock get affected: the processes, culture, metrics, performance, funding, service levels and so on. Well-thought-out frameworks and roadmaps need to be put in place to make this transition successful. These frameworks need to cater not only to setting up an internal cloud but eventually to embracing the public cloud over the years – not as easy a task as it appears. A few of the organizations that master this transition may also look at making a business out of it. So it is a journey that needs to be traveled on the way to embracing public clouds. Some businesses may take a staged approach and call it a private cloud, an internal cloud or whatever, but eventually the road may lead to public clouds!

Analog Dollars Into Digital Dimes: Challenging Transitions

I was in a meeting with a CIO in New York on Friday where he outlined the onslaught of disruptive innovation in his industry and the broad set of challenges it poses. The magnitude of the change that his organization needed to face was mindboggling! Just before I entered the meeting, I saw a very interesting interview on the television in the lounge. Jeff Zucker, President and CEO of NBC Universal, has succinctly captured what is going on in the media industry with his widely quoted comment that new digital business models are turning media revenues from analog dollars into digital dimes. As he puts it, the key problem for established media companies is to come up with new, innovative business models that will enable them to make money and survive as their industry accelerates the transition from a business model based mostly on analog dollars to one based mostly on digital dimes. The ongoing tussle over online advertising is yet another example of the pain such transitions bring on.

What a difficult and interesting situation to be in. Let's see how leading enterprises typically face issues centered around disruptive models, where the effects can be far-reaching if handled well and disastrous when handled badly. How does a well-run business handle the shift? In my view, while it is tempting to attack everything material in startup mode to tide over the situation, reality shows that this would be next to impossible to pull off. Would-be innovators know that one of their biggest challenges is systematically identifying the innovations with the greatest likelihood of creating disruptive growth. Pick the wrong one, and you squander a year or more of focus and investment. It is not the luck of the draw anymore. By conducting a series of customer, portfolio, and competitor diagnostics, companies in any industry can quickly pinpoint the highest-potential opportunities and the best business models for bringing them to market. A successful approach to putting a positive spin on the disruption could center on figuring out how to leverage the new innovations to evolve the organization, culture and business models into the future. Too often, solution points revolve around assessing and leveraging the skills and capabilities of the enterprise in adapting to the new, changed realities. The challenge is in figuring out how to leverage existing relationships and expand the adoption quotient across the enterprise ecosystem. A judicious assessment of what parts of the existing organization could be useful and what needs to be ripped, replaced or retired has to be carried out in the most objective manner. Preparing leadership internally to manage the disruption is a prerequisite for success.

In reality, these seemingly ordinary questions and concerns may turn out to be the toughest ones to crack, but there is no point in trying to avoid facing reality. In many cases, matter-of-fact cultural impediments may overshadow other concerns in managing the transition. There is no other way forward except to properly manage the delicate balance between winning and excelling in current operations and quickly embracing disruptive innovations, and to lay a solid plan for growth.

IT Savvy Means Getting An Edge From IT

As IT's importance grows inside organizations – with more competition, concerns about ROI and BVIT, pressures on resourcing, offshoring strategies and heightened expectations of IT from the business – all enterprises undergo such changes, and an appropriate framework, with a three-year rolling-plan perspective for IT strategy and planning, is absolutely essential for any medium-sized to large IT user organization.

I just read this nice interview in the WSJ with MIT's Peter Weill on IT Savvy, his (excellent) recent book co-authored with Jeanne Ross. A nice interview – in essence it covers the main ideas of the book: standardization for innovation, IT as strategic asset vs. liability, creating digital platforms, and the importance of connecting projects. A couple of excerpts:

BUSINESS INSIGHT: Your newest book is about IT-savvy companies. How do you define IT savvy?
DR. WEILL: IT-savvy companies make information technology a strategic asset. The opposite of a strategic asset, of course, is a strategic liability. And there are many companies who feel their IT is a strategic liability. In those companies, the IT landscape is siloed, expensive and slow to change, and managers can't get the data they want.
IT-savvy companies are just the opposite. They use their technology not only to reduce costs today by standardizing and digitizing their core processes, but the information they summarize from that gives them ideas about where to innovate in the future. A third element is that IT-savvy companies use their digital platform to collaborate with other companies in their ecosystem of customers and suppliers.
So, IT-savvy companies are not just about savvy IT departments. It's about the whole company thinking digitally.
BUSINESS INSIGHT: You've done some research that suggests IT-savvy companies are more profitable than others. Tell me a bit about that.
DR. WEILL: The IT-savvy companies are 21% more profitable than non-IT-savvy companies. And the profitability shows up in two ways. One is that IT-savvy companies have identified the best way to run their core day-to-day processes. Think about UPS or Southwest Airlines or Amazon: They run those core processes flawlessly, 24 hours a day.
The second thing is that IT-savvy companies are faster to market with new products and services that are add-ons, because their innovations are so much easier to integrate than in a company with siloed technology architecture, where you have to glue together everything and test it and make sure that it all works. We call that the agility paradox—the companies that have more standardized and digitized business processes are faster to market and get more revenue from new products.
Those are the two sources of their greater profitability: lower costs for running existing business processes, and faster innovation.
DR. WEILL: The real secret to IT-savvy companies is that each project links together—like Lego blocks—to create a reusable platform. IT-savvy companies think reuse first. When they have a new idea, the first question they ask is: Can we use existing data, applications and infrastructure to get that idea to market fast? When we look at the impact of reusing processes and applications, we see measurable benefits in the top and bottom lines.


The book also covers defining your operating model, revamping your IT funding model, allocating decision rights and accountability, driving value from IT and leading an IT Savvy firm.

This is a book highly regarded by the cognoscenti. It starts by asking what being IT savvy means, and answers: the ability to use IT to consistently improve firm performance. The book encapsulates very powerful observations and statements that matter:

- You have to stop thinking about IT as a set of solutions and start thinking about integration and standardization. Because IT does integration and standardization well.

- IT Savvy firms have 20% higher margins than their competitors.

- An operating model is a prerequisite for committing to sound investments in IT

- IT funding is important, as systems become the firm's legacy that influences, constrains or dictates how business processes are performed. IT funding decisions are long-term strategic decisions that implement the operating model

IT Savvy is based on three main ideas with some commentary from the reviewer.

1- Fix what is broken about IT, which concentrates on having a clear vision of how IT will support business operations and a well-understood funding model. In most places, IT is delegated and benignly neglected in the enterprise, with the disastrous consequences of underperformance and poor leverage.

2- Build a digitized platform to standardize and automate the processes that are not going to change, so that focus shifts to the elements that do change. The platform idea is a powerful one and can drive significant margin, operational and strategic advantage.

3- Exploit the platform for growth by focusing on leading the organizational changes that drive value from the platform. With a platform built for scale, leverage the efficiencies that scale can deliver. Ironically, many enterprises fail to do one of these two!

Don't miss the IT Savvy assessment methodology. Overall, a very important book and a must-read.