This is part 1 of a multi-part series on the virtualization revolutions no one is really talking about. If you enjoy my longer blarticles, make a cup of coffee; you won't be disappointed by this one.
Why Enterprise Software is Not Dead
I’d like to show you something I just drew on the back of a napkin:
“You know.. for VCs!” (imagine me doing my best Tim-Robbins-ala-Hudsucker-Proxy voice here).
This drawing is actually something I have been predicting for a while now. And recently I seem to be observing it happening at an increasing rate. Enough so that I believe it’s going to radically change the way we think about the software market.
All that in two lines and one virtual napkin – how can you not read on?
New Kids on the Block
Basically the scribble represents an argument of old versus new. For a few years now, VCs
(and many entrepreneurs) have been increasingly repeating the new software
investment mantra “We don’t do enterprise software; we only do SaaS.” So much
so that you’d have a harder time finding a classic enterprise software
investment (CES) on Sand Hill than you would a WGA member at the Golden Globes.
These days, most VCs politely decline your request for a meeting when you fess
up and tell them you’re coming over to drink their coffee and pitch them a CES
investment. It’s the equivalent of trying to pitch “tools” in 1999. So this is the debate: Which is better,
SaaS or CES? And no, the title of this blarticle actually doesn’t give the
answer away. Nice try!
Before I wade into it, let me say that I think I have an interesting perspective from which to enter this debate. The most recent software company I founded, Newmerix, is a CES company by all accounts. It says “Change
management for packaged applications (SAP, Oracle EBS, and PeopleSoft)” somewhere
on the very first slide in our VC deck. That’s like bold-facing and
24-point-fonting the “classic” in CES. Before Newmerix, though, I co-founded, built, and sold a company called Service Metrics. Back in the day we would have called that business an ASP (Application Service Provider), but nowadays the kids are calling it SaaS. So I’ve played for both teams, and I watch the current tide turning towards SaaS-only investments with intent fascination.
Proof of Life
That said, things change quickly in the technology (and thus venture) world. I’m
here to argue that SaaS may not be as good an investment as everyone thinks,
enterprise software is definitely not dead, and in 5 years it won’t make a
difference one way or the other.
If You Don’t Have Something Nice to Say..
Let me start off by playing my own devil’s advocate and stating why I think VCs are
so gung ho for SaaS investments. The reality is that SaaS has exposed (and
perhaps solved) a lot of the problems with CES investments. If we set aside
some of the penguins-on-the-iceberg mentality about why people invest in SaaS,
I think most VCs would probably say one or more of the following things about
their affection for these types of investments right now:
1) An annuity model is more attractive to the company and the buyer
than the perpetual licensing model
2) When we build software in the SaaS model we can get to market
faster than when we build it in the traditional CES way (ship it on a CD)
3) We can sell to a departmental buyer and circumvent the need to
include IT in the buying syndicate (and that’s a good thing)
4) It costs less to build a SaaS company than a classic enterprise
software company
There is no doubt I have missed a few key points. I am sure some folks are just in it for those cool “No Software” buttons that Benioff likes to hand out. For now, let’s at least start with the building blocks I listed. And to continue my devil’s advocacy, let’s fire up the old SaaS versus CES scoreboard and see who the winner really is in each of these arguments:
SaaS – 0, Enterprise Software – 0
Round 1: Annuity Models
Some would argue that they can predict and model the CES approach of selling the first thirty $100K deals directly to IT (assuming you have a compelling enough product to command that price in your first year). Others would gladly take the alternative of missionary-selling 100 deals at $30K a year, arguing there is less risk to the buyer, which is nice for a startup.
While the benefit of annuity models is that (once sold) annuities keep on giving (as
opposed to your standard 18% renewal maintenance on CES as a startup), you have
to be able to show one or more of the following three things very quickly about
the SaaS sales process:
1) You can sell using almost all inside sales (flying people on
planes for $30K deals is a non-starter sales model)
2) You can use a direct sales force because your average contract length multiplied by your annuity price approaches the same number that CES
deals end up at (e.g. >$100K minimum is required for direct selling models
or you go bankrupt while your sales force racks up frequent flier miles)
3) You can do broader up front deals with more seat licenses and
bigger annuity value so your contract length doesn’t matter as much (e.g. $250k
site licenses).
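The arithmetic behind these three options is easy to make concrete. Here's a minimal sketch; the $100K threshold, deal sizes, and contract lengths are just the rules of thumb from the list above, not real data:

```python
# Rough check of when a direct (field) sales force pencils out on annuity
# pricing: expected contract value must clear roughly the same number a
# CES deal lands at. All thresholds here are illustrative rules of thumb.

def expected_contract_value(annual_price: float, avg_contract_years: float) -> float:
    """Annuity price multiplied by the expected contract length."""
    return annual_price * avg_contract_years

def direct_sales_viable(annual_price: float, avg_contract_years: float,
                        threshold: float = 100_000) -> bool:
    """True if the deal is big enough to justify flying reps around."""
    return expected_contract_value(annual_price, avg_contract_years) >= threshold

# A $30K/year deal on a 2-year average contract is inside-sales territory:
print(direct_sales_viable(30_000, 2))    # False ($60K expected value)
# A $250K site license justifies a direct sales model from day one:
print(direct_sales_viable(250_000, 1))   # True
```

The point of the sketch: the annuity model doesn't remove the $100K-per-deal economics of direct selling, it just forces you to reach that number through contract length or seat count instead of a one-time license.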
SaaS – 1, Enterprise Software – 1
Round 2: Getting to Market Faster
Let’s assume you actually strive to put out high quality software (in either model you
choose). And let’s also assume you do the appropriate amount of testing to
ensure high quality. No matter what development methodology you use (XP, Scrum,
Waterfall, “something those guys in engineering do”), whenever you GA a
software release you have to test it. It’s this process of testing that
differentiates SaaS from CES and is the fundamental Achilles’ heel of the CES
model.
In the CES model you end up spending a good 30-50% of your total GA cycle time dedicated to testing. Note that I did not say cost (e.g. you don’t usually have a 1:1 software engineer to tester ratio), but very specifically time. The time problem comes from the fact that in a CES model, testing complexity grows exponentially. To test a piece of software (in either model) you have to figure
out every realistic combination of environments that your software will be
deployed into and make sure you run at least some basic tests in that
environment before shipping it. When talking about the possible environmental
variables that might affect the functioning of your software, this could be the
operating system, OS patch level, database type, database version, database
client adapter version, internet browser version, application integration API
versions, .NET framework versions, web server versions, etc.. etc.. (these are actually
only a subset of the variables we consider at Newmerix). The resulting matrix
is n-dimensional and quickly starts to look like Brian Greene trying to draw a Calabi-Yau space with a crayon. To make
matters worse, the more successful your software becomes, the quicker your
matrix grows as you have esoteric customer situations you must support. There
is always at least one customer who still has Oracle 6 running somewhere in their
IT department and would just “really love it if you could support that” for
them. In addition, the sales team is going to expand this matrix by being
opportunistic, especially in the early days. No matter how much you double
underline and highlight that you only support Oracle databases in your
requirements document, as soon as a $1MM SQL Server deal comes walking through
the door, guess what, you’re supporting SQL Server too! And in a final crushing
blow, whenever you add something to the matrix, for the most part you can never
get rid of it.
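The multiplicative growth described above is easy to see in a few lines. The environment values below are illustrative, not a real support matrix:

```python
# Why the CES test matrix explodes: the environments to certify are the
# cartesian product of every environmental variable you support.
from itertools import product

env_vars = {
    "os":         ["Windows 2000", "Windows 2003", "Solaris 9", "RHEL 4"],
    "database":   ["Oracle 9i", "Oracle 10g", "SQL Server 2000", "DB2 8"],
    "browser":    ["IE 6", "Firefox 1.5"],
    "app_server": ["WebLogic 8.1", "WebSphere 6"],
}

environments = list(product(*env_vars.values()))
print(len(environments))  # 4 * 4 * 2 * 2 = 64 environments to certify

# Then sales lands that $1MM deal and a new database version appears --
# and, as noted above, nothing ever leaves the matrix:
env_vars["database"].append("SQL Server 2005")
print(len(list(product(*env_vars.values()))))  # 80
```

Note that adding one value to one variable grew the matrix by 16 cells, not 1 -- that's the exponential time cost, and it compounds with every new OS patch level, adapter version, and esoteric customer environment you agree to support.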
For those of you who have not been through this virtuous startup sales cycle
before, you may be shaking your head in disbelief. Come on you say, you should
have more restraint than that. Focus on the revenue you want, not just any old
revenue in the pipeline. Well, as a business grows, I 100% agree that you
should strive for this focus. But let’s put it this way. Say your spouse is
knee deep in cooking for a dinner party you’re having that night. They send you
to the store for 5 limes they forgot to pick up for the prized entrée of the night. Like a well-trained spouse, you brave the freezing night weather and
drive around to every grocer in town only to find there is no one open that
late (it’s a Sunday). In despair you end up at the liquor store, ready to drown
your sorrows with your new friend, the tequila worm. As you’re checking out, you
notice a basket of fruit. Limes! But there are only 4 janky limes and some
lemons left. What do you do?
Well, I guarantee you’re gonna bring home the limes no matter what they look like.
And depending on how well you know your spouse, you’re probably going to pick
up 5 lemons too. It can’t really taste that different with lemons instead of
limes, right? They are all citrus fruits after all; just a little switch to the
recipe and off we go. Clearly in this little story the VCs are your spouse,
tonight is the end of the quarter, you’re the CEO and the recipe is the
engineering team. It’s really not much different in a startup. Well okay, maybe
there is more tequila in a startup.
To cap off this discussion, I have included for your viewing pleasure a little example from a big company. Below is a small slice of a spreadsheet I created which
shows PeopleSoft’s PeopleTools version support for the various customer database
types and release versions they test. Every time Oracle releases a new version
of PeopleTools they need to run a battery of tests against every supported
database type and version in this matrix. And this is only 2 variables in the
complete set of considerations when testing PeopleSoft. You get the point (and
I’ve cropped even this picture dramatically in both dimensions!).
I’m going to bastardize a Winston Churchill quote here and say “Quality is just one damn test after another”.
And if you’re thinking to yourself, “Why not just add more testers, bear a little more cost, and get on with solving another problem already?” then please read The Mythical Man-Month. To summarize it: it just doesn’t work that way. Nine women can’t have a baby in one month. The one viable solution is to invest heavily in test automation. While it’s not a complete panacea for this problem, it definitely lets you scale in a way that you can’t by testing manually. While I have a lot to say about automated testing (it’s one of the core products Newmerix makes and sells to packaged application owners), I won’t go into that long digression right now. You’ll thank me later.
Back to the plot. So along comes SaaS with its sexy web front ends, its new-fangled
ajax, and its smug little annuity model. But underneath the gloss and glam, the
best thing about SaaS is that they control their own infrastructure. Engineering
has only one customer – themselves. There is no n-dimensional testing matrix. In
fact there is only one cell in the testing environments spreadsheet – it’s
whatever the hell the SaaS company wants it to be. If you’re building a SaaS
app and you want to run Oracle 10.0.0.3 on Windows 2000 SP1 with a WebLogic 6.7
application server on powder-blue Power Macs running Parallels then fine,
that’s all you have to test! Calabi and Yau would be behind the counter at
Starbucks foaming you up a double caramel mocha latte if it was a SaaS-only
universe out there. And to make things even rosier for SaaS, you don’t get
crushed under the weight of your success (at least from the testing aspect).
There is no ever-growing QA lab full of customer environment testing machines.
It’s one environment all the time, totally predictable, totally testable.
This singular architectural benefit allows SaaS companies to move infinitely more quickly than CES companies. Big releases come out faster, bugs get fixed and deployed quickly, and features can be rolled into the GA product in a much more incremental fashion than is realistic in CES. In a SaaS model you can literally roll out a change every week or two if your agile development methodology allows it and your testing is well automated. This is simply an impossibility
in CES as it stands today. In addition, CES customers don’t want to
upgrade their applications on a weekly basis. In the SaaS model it all happens
behind the scenes.
Faster and more incremental time to market means less of a gamble on getting the whole thing right in one big bang and more feedback from the market as you try to define the optimal feature sets for your customers. This, in my view, is the
biggest strength of the SaaS model.
So let’s score 1 for SaaS here, by far.
SaaS – 2, Enterprise Software – 1
Round 3: The Departmental Buyer
SaaS – 3, Enterprise Software – 1
Round 4: It Costs Less to Build

Let’s go back to my napkin drawing for a second. The scribble represents two things.
The bottom line is the revenue curve of most CES companies over time. The top
line is the revenue curve of most SaaS companies over time. Notice two things. The first is that the SaaS line gets to revenue much more quickly than the CES line. This is consistent with what the VCs I know have witnessed.
The second and more important part of the diagram that I will have you notice
is that the lines converge at some point. I’ll argue that point is really the
average break even point for most software companies (about $10-12MM run rate).
But more subtly, at some point in time, something about the SaaS model starts
to make it more inefficient than the CES model. Or to say it in a more positive
way, something about the CES model starts to kick into gear if you get far
enough along the curve. While this scribble is clearly not going to win any HBS
awards for discrete financial modeling, there is one observed phenomenon and
one technical reason that make me think this will become a common perception very
soon.
In talking with many of my VC friends, I am seeing that the break even point
(money invested versus cashflow break even) is actually in the end pretty much
the same for SaaS and CES. As a rule of thumb, $10M is a pretty good target for
break even in a software business. Guess what, it takes about $25M of real cash
to build a solid, sustainable growing software business no matter how you do
it! Having built a few, my gut tells me this is true for a number of reasons.
Simplistically, I believe that buyers (of either 2,000 SaaS seats or $250K of CES software) have the same perspective of total value in what they are buying. It
does not matter if their software is delivered through the web or sits in their
own facility. Couple this with the fact that developers write the same number
of lines of code to create that value every day whether it is delivered in a
SaaS model or a CES model. Testing may take longer in one model due to the
matrix but developers still crank out their 600-1000 lines of code a day to add
perceived value to the product.
The technical reason why these two lines catch up with each other is that in the SaaS model you trade off deployment complexity (e.g. what environments do I need to deploy my software in) for deployment scalability. It’s one thing for a developer to design a system to manage the transaction or data volume of the largest enterprise customer they can think of; it’s another to design a system
capable of handling the aggregate transaction volume of 400,000 customers of
all sizes. Along with that come the DBAs, the actual hardware, the power costs in the data center, the bandwidth, etc.. In CES, Brian Arthur’s classic
increasing returns philosophy is right. Spending $50MM to make the first
Windows disk and then $3 for each copy after that is a pretty good model. In
SaaS it doesn’t really work that way. The first system might cost you $20MM to
build but your incremental delivery cost is much higher. Unfortunately scale
issues in software are non-linear – they too are an S curve, but an inverted S
curve. You really don’t want to get on the wrong side of that S. Trust me – it’s
worse than messing with Texas. In the CES model the ISV pays to handle deployment complexity (testing) for their customers and they don’t worry about deployment scalability. In the SaaS model the opposite is true: there is little
deployment complexity but the deployment scalability is massive. This is why
the two curves catch up with each other sooner or later. You’re really just
moving the total cost of delivering the software from pre-GA to post-GA.
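A toy model makes the shape of those two curves concrete. Every constant below is made up for illustration; only the structure (CES pays up front for the test matrix, SaaS pays per customer for delivery) comes from the argument above:

```python
# CES: heavy pre-GA spend (build + funding the test matrix), then nearly
# free delivery per customer ("$3 per Windows disk"). SaaS: no test
# matrix to fund, but every customer adds hosting, DBAs, power, bandwidth.

def ces_total_cost(customers: int) -> float:
    build, test_matrix, per_customer = 20e6, 15e6, 3.0
    return build + test_matrix + per_customer * customers

def saas_total_cost(customers: int) -> float:
    build, per_customer = 20e6, 600.0
    return build + per_customer * customers

# SaaS starts cheaper, but its per-customer cost eventually catches up:
crossover = next(n for n in range(0, 500_000, 1000)
                 if saas_total_cost(n) > ces_total_cost(n))
print(crossover)  # 26000 -- with these made-up constants
```

Where exactly the crossover lands depends entirely on the constants; the structural point is that it always exists, which is why the napkin's two lines converge.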
I am sure on both sides of the equation everyone can name aberrant data points (e.g.
PeopleSoft went cash flow break even for $6MM – I am not sure what that is
inflation adjusted from 1985 – maybe something like $17MM). I am sure there is
a SaaS company or two that got to break even for less than $25MM invested. But again, while it’s an emergent phenomenon, I think VCs are going to have to come to terms with it in the next few years.
I’m going to score one for CES here because I think this dynamic is vastly
underestimated in the investment community right now.
SaaS – 3, Enterprise Software – 2
Okay so we’re sitting here looking at a scoreboard
that clearly has SaaS in the lead. Hit the gas, you say, let’s go find some SaaS companies to fund. Well, hold on. I said in the next 5 years the dynamic would
shift again. All it would take is 1 point to move from one side to the other
and CES looks like a better investment again. Well keep reading and I’m going
to give you more than 1 point to consider.
Why (for the most part) None of This Will Matter in 5 Years
First, let’s pin down what people generally mean by SaaS today:
1) SaaS is browser based (as opposed to client based)
2) SaaS is web based (e.g. a hosted application)
3) SaaS is multi-tenant (this actually doesn’t matter to the customer
other than from a data protection and security standpoint but it’s an
architecturally relevant note in terms of the SaaS application design)
4) SaaS is hosted by the ISV and there is no software on the customer
premise
5) SaaS runs on one infrastructure stack dictated by the ISV (as
described above)
While these might seem like big differences, virtualization has the capability to
completely normalize them all. Let’s go through the list:
Being Browser Based
There has been an immense amount of time and energy spent making browser-based applications work a lot more like a classic Windows client. With the arrival of desktop remoting and application virtualization such as Microsoft’s Softricity products, the pendulum will swing back again and some people will simply favor using a
client. The interface is more familiar, the functionality can be designed more
directly to the application needs, and the actual user’s desktop OS becomes
less relevant (running a Windows client remotely on a Mac is becoming commonplace with Parallels).
No change in score here but an important shift to note.
Being Web Based
Once again, with remoting of desktops, there is no need to install software on each user’s desktop. Whether it’s 3rd party software we’re talking about or in-enterprise applications, the trend will be to remote everything to the user’s
desktop anyway. I have another post on how virtualization is changing the face
of consumer and enterprise desktops coming. So many posts, so little time.
While this may seem the same as point #4 above, note that customers do install CES web-based applications in their own environment now. Lotus Notes would be a good example, and Newmerix has a web-based product in its suite called Automate!Control. Thus no change in score here, but again an important shift to note.
Multi-Tenancy
Vishal Sikka, SAP’s CTO, made a great point to me when I spoke to him last (I have commented on this in other blog posts as well, as I think it’s so relevant). Multi-tenancy has become inaccurately synonymous
with the database layer in the last few years. Over time though, people will
shift where multi-tenancy lives in the application stack. It might live in the
database (e.g. serve all customers from one app but filter the data they see by
their customer id), application server, application itself, hardware cluster
(shared compute cloud), or simply through security policy. Assuming licensing
structures catch up with the flexibility of virtualization, there is no reason
why as a CES ISV you can’t simply replicate the complete application stack in a
VM image and use utility computing to handle the scaling issues on a per
customer basis. I think that Salesforce.com may find itself in a position of
having invested a huge amount of money in a multi-tenant database structure
only to find out that it simply doesn’t matter anymore. It’s even possible this groundbreaking design approach might become an impediment to the speed at
which they can move in the future.
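To ground what "multi-tenancy in the database" means here, a minimal sketch (the table and data are hypothetical, and SQLite stands in for whatever RDBMS the ISV runs):

```python
# Database-layer multi-tenancy: one schema serves every customer and the
# application filters every query by tenant id. The alternative described
# above -- a full application stack per customer in a VM image -- needs no
# such filter, because isolation moves down to the infrastructure layer.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (tenant_id INTEGER, name TEXT)")
db.executemany("INSERT INTO accounts VALUES (?, ?)",
               [(1, "Acme"), (1, "Globex"), (2, "Initech")])

def accounts_for(tenant_id: int) -> list:
    # Every query the application issues must carry the tenant filter;
    # omitting it anywhere is a cross-customer data leak -- the ongoing
    # cost of putting tenancy at this layer of the stack.
    rows = db.execute("SELECT name FROM accounts WHERE tenant_id = ?",
                      (tenant_id,)).fetchall()
    return [name for (name,) in rows]

print(accounts_for(1))  # ['Acme', 'Globex']
print(accounts_for(2))  # ['Initech']
```

The same tenancy decision could instead be made at the application server, the hardware cluster, or via security policy, as the text argues; the filter just moves with it.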
This is actually a shift in favor of both CES and SaaS players. Either type of
player can change where multi-tenancy lives to their advantage. So no change in
score here either.
Hosted Applications
With utility computing and virtualization, any classic ISV will have the option to
install their software on a VM and run it from any one of a number of
virtualized utility computing infrastructures, essentially providing their
application as a hosted service. All the complexity that SFdC deals with
related to managing a proprietary multi-tenant architecture will essentially
disappear into the cloud and hosting VMs will become a matter of policy and
security management and little else.
This capability will drive CES ISV pricing options to include an annuity option (as
opposed to a perpetual license). CES vendors will simply install and host their
own applications on VMs. The multi-tenancy will actually come from the utility
compute grid. This VM hosting will also allow CES ISVs to target a departmental
sale as the customer does not need to manage or upgrade any of the underlying
application infrastructure. Classic CES vendors will be able to cut IT out of
the buying loop just like SaaS players.
Definitely score 1 point back for CES here.
Infrastructure Stack
If a CES ISV moves towards a hosted virtualized platform, they all of a sudden
gain the immense benefits of having only one reference infrastructure. As
discussed above, this massively reduces testing time, is a dynamic that does
not change with increased customer adoption, and drives the ability to get to
market in a much quicker fashion. To some extent, this advantage of virtualization
alone will end most of SaaS’s advantage. Once CES ISVs can deploy hosted VMs
running their own selected software stack, the playing field becomes instantly
leveled and disruption happens all over again in reverse.
Again, score a point back for CES, or at least consider it a wash.
If I do the quick tally, we’re back to SaaS – 2, Enterprise Software – 3 (erring on
the side of being conservative). Oh how quickly the tides shall turn!
Disruption vs Innovation
A number of times in this article I have mentioned disruption. I fundamentally
believe that the success of SaaS as a generalization is more about taking
advantage of market disruption than really driving new conceptual invention. I
think this strength might become its ultimate weakness as a sole investment
focus area.
There has been massive consolidation in the CES market in the last 5 years. This has
put the large ISVs (Oracle, SAP, IBM, BEA, etc..) on the back foot in terms of
their ability to react quickly to dynamics in the market. One need only look at
Oracle’s constantly elongating Fusion roadmap timeline for proof that good CES just takes a long time. And because it takes a long
time the players must devise strategies that are worth the effort. Hence these
big companies have broad reaching, deep-infrastructure, transformational
strategies for their next big thing. Most of the time it’s just about swinging
the pendulum back in the opposite direction on the consolidate/modularize
spectrum from whichever way it has been going for the last few years.
CES today is all about SOA, merging packaged applications and custom development
into composite applications, building out application platforms on top of existing
packaged application deployments, BI and centralization of data. The first
blarticle I ever wrote was about this very trend.
The SaaS players that have been successful basically do what the large software
companies do but faster, in a new model (annuity versus perpetual license), in
more modular fashion from a sales perspective, and with a divide and conquer
approach to the buyer. Essentially they identified the monolithic ISV-IT
juggernaut and built a speedy little boat to zigzag right around the massive
aircraft carriers that most large ISVs have become. There are always new winners
in a market when some kind of disruptive force leaves open opportunities for
new players to shake up the status quo. Sometimes this comes in the form of
architectural shifts, sometimes it’s pricing, sometimes it’s core technology
changes, and sometimes it’s a mixture of these things. To date that’s what the
most successful SaaS companies have been about. Let’s be honest, there were already more than enough CRM platforms in the world when Salesforce came along.
While I am sure many of you will disagree, I think SaaS’s successes have not
been about inventing new things, but doing existing things better.
As a side note, I think a great analog here is the open source market. As the CTO
of Digital Persona once noted to me, the best open source applications are
those that simply copy an existing well known product and replicate it with a
different development model. We’re seeing some big wins in this space. The
classic example would be Red Hat, but you also have JBoss (now part of Red Hat), Zimbra (now part of Yahoo), SleepyCat (acquired by Oracle), Firefox, Xen (acquired by Citrix), the new Cobia (full disclosure: I am an investor in StillSecure), and the growing footprint of OpenOffice. It’s hard to name many incredibly successful open source products that were completely new innovations. The same could be argued for SaaS.
From an investor’s perspective, market disruptions have well-defined windows. It’s
usually 7-10 years in total length with heightened activity and value creation
just past the halfway point. That’s what we’re seeing now with SFdC, NetSuite,
and other going-public SaaS plays. But this disruptive situation will taper off
in the next 3-5 years as large ISVs right their ships, buy strategic innovation
from leading SaaS companies and learn how to compete in the new market. It was
already happening when Siebel bought UpShot, and now it’s happening in spades with SAP’s BBD platform.
From an investment thesis perspective, I think it’s very useful to look at the timeline of this current disruption. How long does it really take for a SaaS company to gain critical mass, and how much longer is the disruption going to last? If the window is really shrinking to less than 5 years as I have argued, that’s a pretty tight window for any new software company to build a product, enter the market, and declare some kind of victory. Perhaps it’s time to start shifting from investments in market disruption and go back to investments in conceptual innovation. In the same way that investing in a social network is probably a bad idea right now, my guess is that by 2009 there will be few new companies that get started that actually make any real money for investors in the SaaS space doing the same thing an existing CES player is doing.
If
I Was King for Just One Day
Okay, so it’s easy to sit around in a nice glass house and throw lots of stones. Lest
I be accused of doing so, let me outline a few software investment theses I
would have if I was a software VC making investments in the market right now.
As mentioned, for the next 2 to maybe 3 years, simply disrupting the existing
software incumbents with web-based, annuity based recreations of the heavy
software they already sell is still going to be a smart investment. After that
(and even right now one could argue) you’ll need a change in strategy. I should
say that in the end this thesis is not about SaaS or CES at all. It’s really about
new ideas that can get to market first in whatever model makes most sense.
Regulatory Change
Massive amounts of IT budget go into solutions that address changing regulatory requirements. The most recent example is the emergence of the compliance
market. Whether it was to manage SOX, J-SOX, CLERP 9, PCI, BASEL, GLB, or
HIPAA, a number of companies emerged that provided true value to IT owners in
dealing with these issues. Virsa was bought by SAP for $400MM,
a number of security vendors were bought by the majors to handle security based
compliance, and even companies like Newmerix have benefited broadly from the
compliance backdrop of IT departments.
Management Philosophy Change
The biggest new trend in IT is ITIL. For those not familiar with ITIL, it is a service-based approach to managing IT resources and doing impact analysis of changes to the IT environment. In general, ITIL (and COSO and COBIT) are manifestations of the fact that IT departments simply must find a better way to manage the rate of change across the IT stack. We’ve seen companies in the
hardware stack (Troux), the OS stack (BigFix, HP/OpsWare), and the application
stack (Serena, MERQ/HP, Newmerix) all benefit from this new challenge in IT. It’s
not necessarily sexy but it’s fundamental stuff.
Sticky Technology
As we have learned over time, technology has a long tail in IT departments. How many companies still run massive transaction processing systems on mainframes? How many companies still have COBOL and FORTRAN applications running in their IT department? How many companies are stuck with packaged application infrastructure they bought 9 years ago to solve the Y2K problem? These things never go away – they have 20-year life spans. Many a company has been immensely successful managing these core (I am avoiding the term legacy here) systems. Simple examples would be Candle, BMC, Newmerix (so far so good), and the new rollup Rocket Software.
The Drive to Efficiency/Drive to Scale Pendulum
IT companies flip-flop in their macro strategy between driving to efficiency and driving to scale. These are long cycles (7-10 years) and repeat
themselves over and over. Examples of companies that made money off helping new
technologies scale are F5 (and all the load balancers), my last company Service
Metrics that measured the performance of web sites as people tried to deliver
scaled web sites to them, a host of players in the SAN/NAS space, and companies
like Akamai which help companies scale their web delivery infrastructure. On the flip side, virtualization is an absolutely incredible example of the shift back to efficiency. If you own a data center it’s all about consolidation,
maximization of resources, and modularization of infrastructure for efficiency.
I just got some frontline statistics on the average utilization of hardware
resources in major enterprise customers. Less than 10% is normal. That’s LESS
than 10%. Tons of money (we’ve already seen this with VMWare and Xen) will be
made helping people make better use of what they have.
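The consolidation math behind that statistic is stark. A quick sketch, with a hypothetical 1,000-box fleet and an assumed 70% target utilization (the fleet size and target are mine, not from the frontline numbers):

```python
import math

# If each physical box averages under 10% busy, virtualization lets you
# pack those workloads onto far fewer hosts at a sane target utilization.
hosts, avg_util, target_util = 1000, 0.10, 0.70

consolidated = math.ceil(hosts * avg_util / target_util)
print(consolidated)  # 143 hosts carry the load of 1,000
```

Even leaving generous headroom, roughly six out of every seven machines (and their power, cooling, and floor space) are candidates for consolidation -- which is exactly where the VMware and Xen money is being made.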
Behind the Firewall or Integration Plays
One thing that SaaS currently struggles with is integration into existing IT
infrastructure. While SFdC has made strides in its APIs and Web Services
interfaces, it’s really not an enterprise level integration platform. In
addition, it’s still a pain to tunnel from outside the firewall back into it
for general software infrastructure. This may change over time but CES vendors
have the jump on SaaS players here. Note for example SAP’s MDM, any BI product, or the recent purchase of FAST by Microsoft for enterprise search. I believe
there are huge opportunities to understand the existing application
interdependencies and build products to help integrate, transform, scale and
manage these interactions behind the firewall.
That said, we’re starting to see SaaS integration plays that are doing well. Eloqua
is a good example. I guess when you have enough important data across multiple
systems sitting anywhere (inside or outside the firewall) there is a need to
integrate it in some shape or form.
The Reports of My Death Have Been Greatly Exaggerated
Neil, Great info in a fun, readable form - wish more people could do it so well.
One aspect is the ability to tailor/customize CES to the exact requirements of the deploying company - I believe this is the cost Ravi refers to, but it is also where the company truly gains competitive differentiation. If SaaS restricts this ability, then a lot of that value is lost and you get 'lowest common denominator' functionality.
I score CES +1 on this as of today.
Posted by: MT | March 30, 2008 at 05:49 PM
Niel, thanks as always for an incredibly insightful analysis. I work for a small boutique consulting company that specializes in PeopleSoft applications for a specific industry. We also provide strategic and management consulting although that is secondary to our application expertise. In our world we increasingly see SaaS applications implemented or seriously considered for replacing legacy apps even though SaaS products usually offer a limited subset of the legacy apps functionality. If the SaaS product performs a key few functions better than the legacy app then the rapid implementation time and minimal internal footprint make these solutions incredibly attractive. Personally I find IT leading the charge, not departments looking to get around IT.
If you think about it this is analogous to the popularity of MP3 music files where the loss of sound quality is more than offset by the ease of storage, transferability and availability (even if most of the availability is illegal!). Even in the business world a similar process frequently takes place when corporations opt for MS SQL Server over Oracle or other more robust RDBMS alternatives. The ease of installation, maintenance and low price of SQL Server outweigh the inferior performance, lack of high end tools and platform options that other solutions provide. And I say this as a big SQL Server fan.
You correctly point out that integration is the biggest limitation of SaaS. Clients are not replacing back office systems because the linkages and efficiencies gained by integrated data flows far outweigh any software/hardware savings achieved by using outside HR, payroll, billing, GL, AR, etc., etc., products (or even solutions that combine some of those functions). In fact, the more important integration becomes the less attractive SaaS solutions are – those clients that have implemented SaaS solutions are willing to introduce (or reintroduce) manual or less automated steps into their processing cycles. In fact our company is contemplating whether we can offer some of our software in the SaaS space with the additional expertise of knowing how to integrate it seamlessly with large ERP apps. We haven’t figured that out yet…
Having been around IT since the late 80’s I’ve seen the decentralization/centralization shift occur a number of times. In one way SaaS (and hosting) is the natural evolution of TSO and the heyday of companies like EDS and CompuServe. It will be interesting to see whether virtualization is the main driver as you forecast in this constant swing.
Posted by: Wes | February 21, 2008 at 09:03 AM
Great thought-provoking "blarticle" (cool name) and view on the two major deployment strategies software companies can base their focus on. There is a third strategy that I'm sure you've thought about, but may not have heard much about. This approach adds a straighter S-curve line in between your two curves on the napkin. There are a number of companies using a "hybrid" approach to SaaS and CES that merges "best of breed". (I know, cliché term, but it's all I could think of.)
Approach: Build software tools on a controlled platform - OS/framework/database of choice with a browser client as the front-end (define narrow browser support, as that brings a whole new level to the support matrix). Then, through a consulting approach, not CD delivery, deploy said solutions behind firewalls at a departmental level, with IT involvement, via virtualization on predefined hardware scaled for the specific deployment, and/or use said company's IT purchasing power and deploy on their hardware of choice.
This methodology controls the environment, manages testing in controlled labs, manages the deployable environment, lengthens the development cycle (where some customization/configuration is needed), shortens the final deployment cycle, and in general flattens costs versus revenue.
Second hybrid approach, if client doesn't want the solution in their datacenter(s) for reasons unknown, host specific dedicated server(s) in your own datacenter as a SaaS, but with CES license model. Get the benefits of SaaS and sell as CES.
There are quiet little companies and/or divisional development groups following these hybrid approaches. They're few and far between, but I think as more software companies learn about this approach, the SaaS and CES lines will blur.
Posted by: Shane Cooper | February 04, 2008 at 07:37 AM
This is a great article, and it does sort of beg the question of why one has to choose. This is the 21st century, and software should be able to do both SaaS and on-site deployments.
I was glad to see that you threw in the "aside" about Open Source software, as that is the real disruptive force in the 21st century.
For a great example of a company doing all of these things (and is perhaps just a copycat in all other ways), see SugarCRM, www.sugarcrm.com.
Of course, there are switching costs, both for customers and ISVs, but it seems to me that, if you're going to start from scratch, SaaS v. CES is not a relevant dichotomy. I think that is in fact the gist of your post.
However, Open Source v. Closed is a forced choice, and more important to the software industry going forward.
Just my $.02
Posted by: Jim | January 21, 2008 at 11:45 AM
Excellent article Neil, but you have missed out on a dirty secret of CES: expensive implementations by an army of consultants.
To me that is a big differentiator for the SaaS companies. The implementation cost for a customer using SaaS as opposed to CES is an order of magnitude lower.
Posted by: Ravikanth | January 19, 2008 at 03:21 PM