It is what it is....

Wednesday, October 24, 2007

Advanced Data Centers, Inc. 50 Megawatt Sacramento Project

For those of you who have a blog or maintain a website, you're probably familiar with web analytics. I was first introduced to web analytics in my days at Equinix, where I managed the Omniture relationship. When I first began working with Omniture they had a couple of racks of servers. By the time I left Equinix, Omniture was close to 1,000 racks of equipment spread across the globe. This happened over the course of a couple of years. It didn't take a dummy to figure out they were onto something. A couple of months after I left Equinix I started blogging and was introduced to web analytics from the point of view of a customer/user. I believe analytics is one of the most powerful elements of online behavior. Since the internet is subsidized by advertising dollars, analytics plays a huge role. But this post isn't about analytics. It's about, as the title suggests, a new 50 MW data center project underway in Sacramento, CA. 50 megawatts is the most power at any single data center site that I have ever come across. This project has been, and will continue to be, the recipient of a good portion of my attention for quite some time...or until we are out of space and power.

After spending seven years at Equinix I thought I was done with data centers. So much so that it made my head spin even thinking about them. Shortly after leaving, it dawned on me that the data center market is unique in that there are natural barriers to entry in the form of an extremely high cost of admission. Some people claim that data centers are being commoditized, and with all the new construction taking place this line of thought is getting more attention. For those of us building data centers this is exactly what we want...as long as it's not the attitude of an investor or banker, of course.

It's positive for us in that, by virtue of its negative connotation, it naturally scares new entrants away, creating additional strains on supply. It's also good because it makes it very difficult, if not impossible, for a new entrant to raise the necessary capital to build out a large-scale data center, with construction costs running approximately $1,500 per square foot. That cost alone is enough to scare most people away. There are quite a few instances of major corporations in the US forgoing the outsourcing model of colocation and becoming site operators themselves. Without huge augmentation, very few companies have the scale and resources to make this work over an extended period of time. Not impossible by any stretch, but certainly challenging. Probably the most challenging element in getting total executive buy-in/support takes place when the CIO meets with the CFO, telling him that he needs $250MM to build a new data center that will support their IT requirements for the next ten years. More often than not, the CFO recalls a similar discussion 18 months ago in which this same CIO was asking for budget to outsource the same type of requirements, albeit with less capacity, to a colo supplier. Once the CFO realizes this isn't a prank and that this guy is serious, he collects himself and pours a stiff drink (it's a late night at the office in this example, so it's all good).

When we embarked on our new venture, we had these types of discussions in mind and saw an opportunity to provide a service which would give the CIO the resources to support their IT customers and, at the same time, spare the CFO a heart attack at every budget meeting.

I'm getting sidetracked, so I'm going to get back to the analytics component. I find it very interesting to take a close look at how people get to this blog. Some get here by entering terms on the various search engines, while some get here by clicking on links on other blogs or news sites that reference this one. And for some, I have no idea how they wound up here. It has been enlightening to see what people search for when they go to Google, Yahoo, or whatever search engine they use. I've found that a good portion of the folks who wind up here search for common terms that we throw around every day in the data center world...things like: watts per sq foot, kva to kw conversion, 60amp 208v circuit, Equinix rack price, Savvis data centers, 365main outage, etc. You get the point. The analytics service I use allows me to see the address and domain of these readers' networks and has given me some insight into what IT people within these organizations have on their plate...or what keeps them up at night. Then again, I could be so wrong and off base I may as well be a fire on a frozen pond. Either way, people who wind up here on my blog appear to be in the process of determining how to address their data center requirements.

Without overstepping the boundary of turning this blog into a sales pitch, I really believe the company behind the aforementioned data center project, Advanced Data Centers (ADC), may be a valuable partner to those folks in need of capacity for their IT gear. As such, what better way to reach them than to discuss it here.

If you're one of those folks or think you will be soon, I invite you to visit the ADC website to see what we're up to and if it looks intriguing drop me a note either on this blog or via the form on the contact page of the website.


Tuesday, July 03, 2007

The Data Center Cheat Sheet - What exactly are we dealing with?

It may be useful to go through a brief overview of Internet datacenter market history to properly appreciate today's market dynamics, so bear with me if this is old news or a regurgitation of a not-so-happy time. Those times build character though :)


Over the past few years the datacenter market has experienced a shift in power as it relates to the relationship between datacenter vendors and their customers or prospective customers. This is a function of an imbalance in supply and demand. From 2000 to early 2005, it was a buyer's market for colo, and buyers played vendors off of each other to get the very best deals they could. And they were quite successful in getting the often desperate vendors to strike deals that were well below financially healthy or sound. From the vendors' perspective, they were just happy to get customers into their datacenters. After all, they had rent to pay to their landlords, and sitting inventory that is not generating any money is worse than selling that inventory for anything greater than zero. A lot of poor pricing decisions were made during this window of time, but pricing wasn't the only questionable attribute of the deals that went down during this period.

The bigger thorn in the side of most of these deals was related to what these customers were allowed to install in each of the racks or cages in the colos. Remember the time and put yourself in a vendor's shoes for a minute. You're negotiating with eBay or some other large retailer, and just the thought of signing this customer makes you forget the notion of profitability. At this point stopping some of the bleeding would be a step in the right direction, and as such you agree to give eBay the best rack pricing you've ever given anyone and don't put any parameters around how much power they can install and consume. Secretly you're really hoping they over-provision power, because that is money in your pocket that helps to offset the low rate on space you've agreed to. Sidebar definition: over-provisioning power is the scenario whereby a customer provisions 60 amps of primary 208V power (as an example) and only consumes 20 amps of it. The customer pays the vendor for the full 60 amps, but the vendor is only on the hook to the utility for what the customer actually uses, in this case 20 amps. That is 40 amps of profit, right? Yes, in that particular month it was. This was quite a common situation, and more often than not it was because customers were ordering their colo configurations based on what their equipment required at full load. The issue here was that nobody was using the equipment anywhere near capacity.
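For the arithmetically inclined, here's a quick sketch of the over-provisioning margin described above. The 60 A / 20 A / 208 V figures are the post's own example; the single-phase assumption and the 730-hour month are my simplifications for illustration.

```python
# Over-provisioning sketch: the customer pays for 60 A of 208 V power
# but draws only 20 A; the vendor pockets the difference. Figures are
# the post's example; this assumes single-phase 208 V circuits.

def unused_kw(provisioned_amps, drawn_amps, volts=208):
    """kW billed to the customer but never bought from the utility."""
    return (provisioned_amps - drawn_amps) * volts / 1000

def monthly_margin_kwh(provisioned_amps, drawn_amps, volts=208, hours=730):
    """kWh per month of billed-but-unused energy (730 h ~ one month)."""
    return unused_kw(provisioned_amps, drawn_amps, volts) * hours

print(unused_kw(60, 20))           # 8.32 kW the vendor sells but never delivers
print(monthly_margin_kwh(60, 20))  # ~6,074 kWh/month of pure margin
```

Of course, this "profit" only holds as long as the draw stays at 20 amps, which is exactly where the story goes next.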

Slowly but surely the economy crawled back up and to the right (on a graphical basis), and with it came increased usage of the internet, ubiquity in broadband access, plummeting storage prices, and innovation in usage of the internet in general. With the economy coming back, more people were employed, and they sure surfed the net at work (I think it would be interesting to see a study done on the productivity output of employees with internet access versus employees without it). More people had disposable income, so they could afford the DSL or cable modem which allowed them to get further faster in their online worlds and gave them new ways to interact with one another via social networks which blended and intermixed with their real-world lives. All of a sudden that steady 20 amps of power consumption started to creep up. And up. And now frighteningly up. Up to the point that, as one VP of Ops at a big player in the space (who shall remain nameless) said, "this place could blow at any moment."

IMO, this was the point at which the tables turned in favor of the vendors. By now supply and demand were getting back to a state of equilibrium, and it forced the datacenter vendors to do what I refer to as 'robbing Peter to pay Paul.' In order to fully grasp that notion you must understand what a datacenter really does. At the end of the day, a datacenter provides space, power, and environmentals to its customer sets. That is it. Datacenters don't provide managed services; service organizations do. Datacenters don't provide CDN or transit; ISPs and CDNs do. Datacenters don't provide storage; storage providers do. We're talking about what the physical datacenter provides: space, power, environmentals, and physical security. Some may argue that these vendors provided interconnectivity, and they did, but that was an added service layer that in actuality doesn't need to be a product of the vendor but could be the product of anyone, or nobody (if it was free).

When a datacenter is built you start out with a shell of a building and an amount of power that you can get delivered to that building. With that shell floor plan and that maximum amount of power available to you, you develop an overall layout of where things will go. Things being chillers, cooling towers, air handlers, generators, batteries, diesel storage, water storage, shipping and receiving, ingress/egress points, different authority levels of access, security, and so on. You don't make these decisions without first knowing how much power you can get, because there is a direct correlation between that amount of power and how many pieces of the mechanical/electrical infrastructure plant will be required and how much square footage they'll occupy in the building. Long way of saying there is a finite amount of power and environmental resources available for consumption. The standard increment or unit of measure in the market is either a rack or cabinet (42 RU of actual space) or a sq ft.
Each rack takes approximately 20 sq ft of space on the datacenter floor. In order to forecast revenue, the datacenter operator simply takes the total square footage of raised floor and divides by 20 sq ft to get the number of available rack spaces they can sell, giving them some ability to forecast revenue. And they did forecast revenue based on these simplistic assumptions. So if you have 50k sq ft of space you can sell 2,500 cabinets. At $800/month per cabinet you'll generate $24MM in annual revenue. Sounds like a good plan, right? The issue isn't its simplicity, but rather that it accounts for only one piece of the equation: space. What about power? If you have 5 megawatts of power available for customer consumption across that 50k sq ft, you have a datacenter built to 100 watts a foot. If you have 7.5 megawatts of power available for consumption in that 50k sq ft, you have a datacenter built to 150 watts a foot. 10 megawatts and you have 200 watts a foot. And so on.
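The forecasting math above is easy to sketch out. All the inputs (50k sq ft, 20 sq ft per rack, $800/month, 5 to 10 MW) are the post's own example numbers, not data from any real facility.

```python
# The space-only revenue forecast from the post, plus the power-density
# figure it leaves out. Example numbers come straight from the post.

def sellable_racks(raised_floor_sqft, sqft_per_rack=20):
    """How many rack positions fit on the raised floor."""
    return raised_floor_sqft // sqft_per_rack

def annual_revenue(racks, monthly_rate):
    """Naive forecast: every rack sold at the same monthly rate."""
    return racks * monthly_rate * 12

def watts_per_sqft(total_watts, raised_floor_sqft):
    """The design density the space-only forecast ignores."""
    return total_watts / raised_floor_sqft

racks = sellable_racks(50_000)         # 2,500 cabinets
revenue = annual_revenue(racks, 800)   # $24,000,000 per year
for mw in (5, 7.5, 10):
    print(mw, "MW ->", watts_per_sqft(mw * 1_000_000, 50_000), "W/sq ft")
```

The point of the density function is exactly the post's point: two buildings with identical raised floor and identical "rack counts" can be radically different products.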


Taking a step back, remember the example of the customer who was allowed to install 60 amps of 208V power in a single rack, or those 20 sq ft? 60 amps of 208V power in 20 sq ft equals roughly 500 watts a foot. Remember the notion of a finite amount of power coming into the building and the linear relationship between power and the amount of space required for mechanical gear? That is because when power is delivered to the customers, the customers consume it via their hardware infrastructure, and in doing so, that hardware gets hot, and gets hot quickly. Hence the beefy ACs that are required in datacenters. The same concept of the division of resources is carried over and applied to environmentals. We still don't have a global standard unit of measure for the industry, because each building has different attributes and a customer may achieve higher utility in one vendor's rack versus a different vendor's rack because of the difference in the amount of available power in that rack. For this reason, comparing Equinix rack pricing to Terremark rack pricing is useless unless you know the power per square foot in each of their buildings. What point is there in trying to get Equinix, who for example's sake has built out a datacenter at 200 watts a foot and is offering racks for $1,000 each, to lower its rate to the $700 monthly fee that Terremark is offering in their datacenter, which is built to 100 watts a foot? Don't you see what a screaming deal you already have with Equinix? To get that same functionality or utility at Terremark would cost you $1,400 a month per rack. (Vendors and associated numbers here are meant for example purposes only.) Circle back to the earlier example of eBay over-provisioning those 60 amps of power, or 500 watts a foot, in the 100-watt-per-foot designed facility, and you quickly realize that you, as the vendor, gave up five racks of space and associated revenue for every one rack of space that eBay pays for. And pays for at the lowest rate you ever gave.
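The density-adjusted comparison above can be boiled down to a price per watt. The vendors and prices ($1,000 at 200 W/sq ft versus $700 at 100 W/sq ft) are the post's stand-in examples, not real quotes.

```python
# Density-adjusted rack pricing: what a rack's monthly rate works out
# to per watt of deliverable power. Prices and densities are the
# post's illustrative examples only.

def price_per_watt_month(monthly_rate, watts_per_sqft, sqft_per_rack=20):
    """Monthly rack rate divided by the rack's deliverable watts."""
    return monthly_rate / (watts_per_sqft * sqft_per_rack)

high_density = price_per_watt_month(1000, 200)  # $0.25 per watt-month
low_density = price_per_watt_month(700, 100)    # $0.35 per watt-month

# Matching the high-density rack's 4 kW at 100 W/sq ft takes two racks:
equivalent_cost = 700 * (200 / 100)             # $1,400/month
```

The "cheaper" $700 rack is 40% more expensive per usable watt, which is the whole argument for never comparing rack prices without knowing the density behind them.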
The deal is 5X worse than you thought. Not only that, but the perception of your company to a stranger walking into your facility is that you are struggling, because your datacenter is only 20% occupied space-wise, those first 500 racks that eBay installed having consumed all of the power and cooling resources. Now imagine you're the vendor who didn't catch this over-provisioning issue until you had oversubscribed your mechanical plant by a factor of 2 or 3X, and you have all customers' usage creeping up simultaneously. What do you do then? You say, "this place could blow at any moment" :) Those of us who lived through those types of situations and conditions will never get into them again. The first time around can be chalked up to ignorance. The second time would only be stupidity. This is evidenced by the hard lines the vendors take today as it relates to placing limits on the amount of power per rack they will allow their customers to install.

Take the example from earlier, with eBay using the entire pool of resources in 20% of the space in the building, and you can view it one of two ways: either the supply of available space just shrank by 80%, or the demand for space just increased by a factor of five. The market adjusted itself, the tables turned in favor of the datacenter vendors, and there are no signs that it will revert back to its old ways. Sure, you hear a lot about new datacenters being built today, but remember, there hasn't been any significant investment in this space in about ten years. During those ten years, computing clusters have gotten physically smaller and financially cheaper while increasing in performance. All of this results in more power consumption per rack unit: doing more in less space, but with no change in what an amp of power is. Meaning the computers got more efficient in both performance and the amount of space they physically take up, but the power is what it is. And that is a matter of physics. Efficiencies aren't a part of power; they're a part of the things that use power. Wrapping this up, the market has experienced all sorts of technological progress on the hardware and software pieces of the equation, allowing users to pack more into less, but that smaller footprint consumes far more power than the larger one did before. The most scarce resource of a datacenter is power. And that means cooling too.



Sunday, July 01, 2007

Data Center Cheat Sheet - The Players in the space

If you're looking for a place to put your computer(s) because you've determined that your office closet isn't the most conducive place to host your critical business apps, customer-facing service platform, customer database, website, or whatever else you're responsible for, chances are you're talking to one or more companies which provide datacenter services. The major national and international players in this space are:

Equinix - pioneered the carrier-neutral model and has risen to the top as the 800 lb gorilla. If you want your own private 'cage' and access to a boatload of carriers and ISPs, you may want to talk to them. North America, Asia Pac, Europe.

Digital Realty Trust - best performing REIT in '06 if I recall correctly. Customers of DRT typically pay for the construction costs of their respective datacenter in DRT buildings. Customers include Equinix, Savvis, Internap, MSFT...basically everyone with the financial wherewithal and domain expertise it takes to make the leap of no return, i.e. spending the cash to build out core MEP infrastructure. If you want total control of EVERYTHING, which means running the day-to-day operations of both the datacenter infrastructure and your computing infrastructure, you may want to have a chat with them. Global reach.

Savvis - includes some of the assets of Exodus, Digital Island, and Cable & Wireless. Smart and experienced management team. Seems to be focused on more than colo and is 'moving up the stack,' so to speak. If you're a customer of IBM or EDS they will be similar in terms of their offerings. If you want a private cage and are planning on running every aspect of your business operations they probably won't be the best fit, but what the heck, maybe they can run it better than you. In which case, you may want to have a chat. Global reach.

365Main - Carrier neutral, expanding rapidly, solid facilities; find current customers and get their take on the overall experience. US based.

CRG West - Carrier-hotel centric, moving into more of a colo model recently. If you need hundreds of racks of space they probably aren't the best fit, but if you need a small physical footprint in terms of space and a leveraged network footprint, they may be worth talking to. Owned by the Carlyle Group, which could mean they have easy access to capital, but who knows how committed Carlyle is to the space. Carlyle was an original investor in Equinix and that didn't turn out as well as it should have for them, so they may have less of an appetite for this space than the CRG West sales guy is telling you. US reach.

Terremark - An Equinix wannabe that's making great strides in removing the 'wanna' piece of it. Expanding in VA and CA; bought DataReturn, which was a decent-sized hosting provider. Historically built smaller sites in tier-two markets, with the exception of VA and CA. US reach. Great customer list, but pretty much everyone on that list is a customer of all of these vendors.

Switch and Data - Very similar to Terremark, but they've built more facilities than any of the other players, in smaller markets, and with smaller facilities (10K to 15K sq ft). Bought PAIX from AboveNet and in that regard has a great customer list, but with the same attributes as the Terremark list.

Internap - Hesitant to include them, but my experience is that they are in most of the deals floating around and are a wholesale customer of Equinix, 365Main, and others, though they do run their own sites, which they acquired over the years. Their domain expertise is on the networking side and not on running datacenters, but then again, if you're in an Internap cage in Equinix, who cares?

AT&T - the former T has some decent facilities, albeit ones which weren't built to support today's computing clusters and the associated power and cooling requirements. If you work at a small bank in the Midwest and are worried about getting fired for pushing the envelope as it relates to looking outside the box, you should talk to AT&T. You may never get through their onerous contract negotiations, so you may get fired anyway. If you do manage to get through and become a customer of theirs, I have a feeling you won't have much fun second-guessing your decision. IBM and T are no longer job protectors for the decision makers they sell to.

Level3 - Was a player in the space 10 years ago, which is why I felt compelled and obligated to include them, but I don't consider them to be a true player any more. A lot changes in 10 years, and you can't upgrade a datacenter once it has peaked out its total design, especially if you have live customers in it.

Where there is smoke there is fire, and the fire here is white hot, the fire being the demand for datacenter space. As such, there are a whole bunch of smaller players emerging onto the scene to do their best to take down Equinix, just as a very young Equinix once tried to do to Exodus. If you're talking to these types of companies I would guess that you have really small requirements, really large requirements, or aren't dealing with a mission-critical application. I state those three reasons not because small regional guys don't know what they're doing (how in the world could I know that?) but because the cost differential between them and Equinix or 365Main is negligible, if anything. In fact, it would be logical to believe that Equinix and 365Main would actually be lower priced than a small player due to the scale they're able to achieve in purchasing, operational efficiency, and the learning curve. "Too small" to me means hosting your code on someone else's servers, so that may be a small webhost who runs their own physical datacenter. "Too big" to me means you consume too many resources in a 'Player's' building for you to be a good fit with their overall objectives.

If you're scratching your head wondering how that (too big a customer for Equinix?) could be the case, I'll explain it in my next post:

- Data Center Cheat Sheet - are we a good fit based on our requirements?

Following that post will be:

- Data Center Cheat Sheet - Power and Cooling Mathematics - you will be shocked! no pun intended :)


Wednesday, April 18, 2007

Sun's Blackbox

Get your mind out of the gutter; I'm referring to their portable datacenter. I was able to attend one of Sun's introductory briefings today in Menlo Park. When Jonathan Schwartz first announced this as a product I was very skeptical and threatened. Skeptical because these containers are 160 sq ft and can support a 200 kW draw. That is 1,250 watts per foot, albeit very isolated. And threatened because of the potential disruptive effect these new devices could have on the traditional datacenter market, my livelihood. Kinda.

I'm still skeptical, but not as much as I was. I'm definitely not threatened, not because I don't believe in the viability but because the two are more complementary than exclusive.

There are a few kinks to be worked out, or how shall I say, items that are quickly set aside during their presentations, but what did you expect? Marketing, marketing. Anyone know Al Hops?

Anyway, back to this Blackbox... A couple of issues to note:

- These are NOT standalone units. They require:


- Multiple high-voltage power connections in a minimum N+1 config: approximately 250 kW of provisioned primary power. (These aren't connections you just run an extension cord for. These are serious high-voltage connections, and as such they require a serious infrastructure plant to step the power down to the voltage required by the box. You don't call PG&E up and order one of these. Typically this will be a branch on a larger power grid, and in the datacenter world it can be likened to a 12 kV branch to a PDU.)

- Cold water: Blackbox units require a cold water feed to cool off the payload, if you will. Supporting 200 kW of draw takes approximately 30 tons of chiller capacity for these Blackboxes. The chiller doesn't come with the Blackbox and doesn't fit on or in one. In fact, a 60-ton chiller, enough capacity for 3 boxes, is about the size of a box itself. Chillers require power to produce cold water, and you don't just plug a chiller into your wall outlet and be on your way. It requires the same or similar types of connections as the Blackbox: high-voltage, high-capacity circuits.

- Water supply: HVAC systems lose water to condensation, evaporation, leaks, overflows, etc., and that water needs to be made up to ensure smooth sailing. Maintaining an N+1 design, you need two supplies of water from separate suppliers. One is obviously your regular water supply, but what about the second? Dig a well, like most datacenters do?

- UPS systems: There aren't any. Seriously. So that should tell me who the target customer is. Someone who doesn't care about uptime? Then why the hell buy all this crap? Why not host it on Amazon S3 or MediaTemple? Who doesn't care about uptime? Google is the only company I can think of, actually Amazon too, who wouldn't care if they lost 8 racks of servers. I just don't think Sun is far enough along to have a solution for UPS that doesn't make you take a step back and say, 'wait a second, where the hell am I going to park five* tractor trailers so I can operate my 24 racks?' (*Three actual Blackbox containers, one container for the generator and batteries, and one container for the chiller.)
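On the cold-water item: for anyone doing their own sizing, one ton of refrigeration is defined as 12,000 BTU/h, or about 3.517 kW of heat removal. A quick sketch (note that nameplate math on a full 200 kW draw comes out higher than the tonnage figures above, which presumably assume the gear runs below full draw; the margin knob is my own illustrative addition):

```python
# Rough chiller sizing: tons of refrigeration needed to reject a given
# IT load, using the standard 3.517 kW-per-ton conversion.

KW_PER_TON = 3.517  # 1 ton of refrigeration = 12,000 BTU/h ~ 3.517 kW

def chiller_tons(it_load_kw, margin=1.0):
    """Tons of cooling for a load, with an optional safety margin."""
    return it_load_kw * margin / KW_PER_TON

print(round(chiller_tons(200), 1))  # ~56.9 tons at a full 200 kW draw
```

In practice nearly every watt a rack draws comes back out as heat, so cooling capacity has to track IT load almost one-for-one; that's why the chiller plant ends up being container-sized itself.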

I sound like I'm bagging on Sun, but I'm not really. I like the idea and know it's a definite winner in niche applications such as military use, natural disaster response, and isolated locations where it can be airlifted in.

The thing is, even if Sun owned the entire market for those specific applications, it still isn't going to get Sun where it needs to be; it's just too limited in size. Sun needs to find a way to make these Boxes the de facto standard choice when a company begins evaluating datacenter options. That, or sell the concept to the colo vendors by showing that the Boxes can compete economically with a standard raised-floor environment. Coincidentally, just like a regular datacenter, in order to support a few of these boxes you will need a significant MEP plant, which is essentially the bread and butter of a datacenter, and datacenter operators are experts at managing MEP. It's a nice fit.

I liken the potential of Blackbox-type architecture to what consumers are using the Amazon S3 grid or Google's own infrastructure (the GoogleOS) for: a shared IT resource that supports unique data for each user and leverages commonalities among users. Everything is virtually connected and resources are shared, so if one goes down it doesn't matter, yet the performance benefits of close proximity are ever-present.

Cost. The fully built-out container (without the computers, chiller, generator, and truck or helicopter to transport it) currently costs $500k to build. Sun alluded to a price point of $250k as the one they're shooting for. $250k for 200 kW isn't a bad deal. Equinix spends about $25k per rack, or $1,000/sq ft, for a 2.5 kW rack. In gross numbers Sun's Box looks good at $1,250/kW, while a traditional datacenter, per Equinix's rough costs, comes in at $10,000 per kW. I don't know what the cost of the chiller plant, electrical switchgear, etc. would be, but I imagine it can't be more than 60% of the total cost of a traditional build, so add another $6,000 per kW and multiply that sum, $7,250, by the number of kW of draw, and you get your total cost for the Box and the supporting MEP gear. In this case it is roughly $1.45MM for 200 kW of datacenter equivalent. For Equinix, it would cost $2MM+.
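Laying the back-of-the-envelope math above out in one place (every dollar figure, the $250k target Box price, the $25k per 2.5 kW Equinix rack, and the 60% MEP share, is the post's rough estimate, not a quote):

```python
# Blackbox vs traditional build, per-kW cost sketch. All inputs are
# the post's rough estimates; rounding differs slightly from the prose.

BOX_PRICE = 250_000          # Sun's target price for one 200 kW Blackbox
BOX_KW = 200
EQUINIX_RACK_COST = 25_000   # ~$25k per 2.5 kW rack, per the post
RACK_KW = 2.5
MEP_SHARE = 0.60             # assumed MEP share of a traditional build

box_per_kw = BOX_PRICE / BOX_KW                # $1,250/kW, container only
equinix_per_kw = EQUINIX_RACK_COST / RACK_KW   # $10,000/kW, full build
mep_per_kw = MEP_SHARE * equinix_per_kw        # ~$6,000/kW of MEP support

box_all_in = (box_per_kw + mep_per_kw) * BOX_KW  # ~$1.45MM for 200 kW
equinix_all_in = equinix_per_kw * BOX_KW         # $2MM for the same load
```

Even after charging the Box for a full traditional-grade MEP plant, it comes in around 25-30% cheaper per kW on these assumptions, which is the economic case Sun has to make stick.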

Lots of potential with this product, but in order to be mass-adopted it needs to demonstrate an economic benefit in addition to the obvious operational ones.


Thursday, February 22, 2007

Next up: GoogleNET

They are giving away everything else, so why not connectivity too? It's all about efficiency, right? Operational efficiency, risk-mitigation efficiency, and customer efficiency. Efficiency is the driving force behind GoogleNET (GNet). What is GNet? Google's foray into the ISP business. This 'business' for them can be classified as marketing, or a means to an end. The end being selling advertising.

The GNet hypothesis: bandwidth costs have fallen to a level where advertising revenue more than subsidizes the cost of the network. We are in the very early stages of a true 'global village,' as Marshall McLuhan called it. The cost structure for a traditional ISP like PacBell DSL...errr, AT&T, Comcast, etc. to supply services to the residence is around $40 per month, and it trends down as they grow because they get cost scale...in the network world, the more you buy the less it costs.

Google will be placing a bet, and a very calculated one, that advertising dollars will not only subsidize the operational costs of providing free services (ISP, VoIP, office apps, etc.), but will exceed them.

Owning the customer's network routes from end to end (being the ISP) provides Google with a private platform for delivering customized content and advertising to each and every one of the people using their service. This delivery platform is always on and knows where you go, what you type, where you live, who your friends are, what files you have downloaded, what you look like, and whatever else they add on to their services. So when Johnson & Johnson or GE or Procter & Gamble or Coca-Cola or Pepsi is planning their media buys for the next year, do you think they'll purchase advertisements on radio, television, print, or the fourth network (Google)? Based on the ability to target a specific population that has the attributes you desire, the choice is clear...you pick Google. Why? Because you know that your marketing message is going to a qualified prospect as opposed to the traditional 'shotgun' approach. Plus, you can get results in real time and tweak your message in real time if it's not working. With the other three media you are somewhat ratholed into trusting some third party for ratings that may or may not even reach the people you want. By the time you figure this out a slew of things can happen...some good, some bad, but why chance it when you don't need to?

At the end of the day, Google is building a traditional-media killer, and the funny thing is...actually not really funny, but kinda...that the writing is on the wall but nobody seems to believe it. I do. @Home Network had this vision but couldn't pull it off, because the cable companies couldn't get their heads out of their rears to see the opportunity that was sitting right in front of them.

Google already has deals in place, and likely in the works, with the producers of original content which will allow their users access to that content. If I'm a content producer, where would I want my content to be seen? Example: let's say you are the producer of The Office, and Google offers you the ability to place your content on their network so that it can be viewed by anyone, anywhere, anytime, and offers a revenue share or some other creative structure around it such that you know your worst-case scenario beforehand. Simple choice, right? Sure, ABC or CBS might offer an upfront fee in the form of dollars per show, but the audience is limited, the timeslot is finite, and in order for someone to view it, they have to purchase cable TV or satellite or whatever, whereas on the Google network your content would be globally accessible and the broadband access, which replaces the cable or satellite TV service, is free to the masses. Additionally, you can develop complementary services that engage your viewership such that you are able to really develop a community around your content, as opposed to content around a community.

What does this have to do with being an ISP? Everything. Why do you think there is such a brouhaha over net neutrality? Without connectivity none of this is possible, and with the right connectivity, all of it is possible and defensible.

The internet hasn't changed anything. The driving force of media is and always will be advertising. Without it, there would be no television, radio, print or internet. It doesn't matter if it's old world or new world, it's still dependent upon advertising, and GNet is to Google what airwaves were to ABC, NBC and CBS: something to exploit in order to sell advertising.


Monday, February 05, 2007

Google Should Buy Salesforce.com

Google is pimping search because it pays the bills. Search won't always pay the bills though, at least for Google. They know this in MtView, so don't think I'm off my rocker just yet...hear me out :)

Google's largest investments are not in hiring mathematicians to write new search algorithms. Their largest investments are in infrastructure, and they just happen to be funded by the revenue realized from selling advertising within search. They are investing in infrastructure to support, for lack of a better term (seriously can't think of one), the Google OS. At the end of the day, the Google OS is the combination of software and hardware that creates a 21st century mainframe. In 2006 alone, Google spent $1.6B on datacenter construction. They have forecast at least another $900M in datacenter costs in 2007. Google is probably spending more money building datacenters in these two years than the entire datacenter provider market has spent on construction in the last five years. They aren't doing this for search; they are doing it because these are the new homes for their 'mainframes'. Google is the biggest ally Salesforce could ever want to saddle up with. Why? Because they are placing HUGE bets, maybe even the whole company, on the fact that software will be delivered as a service. I think it will too, but that doesn't really matter now....

When software is delivered as a service, the architecture is such that applications like Oracle or Siebel or MS Word or PowerPoint or Excel or financials, etc. do not live on computers that their users own; they reside on many clusters of computers that Google, in this example, owns. These clusters are what I'm calling the 21st century mainframe....hmmm, how about the GooFrame or GoogFrame...whatever. The search algorithms and engines Google uses today are likely the foundation of the overlay engine that will allow the distributed end product of $3BB, the datacenters, to function as one big-ass computer that can be virtualized dynamically as demand warrants. Pundits will argue that bandwidth will challenge this theory.

Bandwidth is so plentiful that soon companies won't be able to give it away; they may even pay you to use theirs instead of their competition's. Google is in the infancy of this movement now via their wifi initiatives, and once they buy ATT or TW or Comcast or whoever, they'll have even more reasons why giving access away makes sense. They won't have to pay the LEC if they own it. The marginal value of adding a user onto the network, being able to market to them, and controlling the flow of data from their 'mainframes' to the user's device gives them the control they want and need, and surely exceeds the nominal cost of that user.

They aren't doing this to kill Microsoft or Intel or the PC industry in general...those companies will adapt. They are doing this because their end product is advertising. Advertising in 2015 won't look anything like it did in 1985, just 30 years earlier. In fact, in 2015 we probably won't even realize we're being marketed to, because due to the intelligence of the systems we'll rely upon, each of our individual experiences will be unique, a function of analyzing the history of our electronic usage. That's our email, our office apps like Word, Excel and PowerPoint, data, voice, video, gaming, business financials, web surfing, the old-school networks (TV, print, radio)...which is EVERYTHING that Google is releasing as new products or services today! And everything they'll 'host' for you on their mainframes.

Back to the Salesforce logic. All these services have nothing to do with their users' core competencies. The lumberyard that Bob the Builder buys his wood from doesn't have expertise in IT, and in an efficient market it wouldn't need to: just as the lumberyard isn't an IT expert, the IT expert for whom Bob the Builder is constructing a home isn't building the house himself, because it's not efficient for an IT expert to build a home when there are readily available contractors. So the lumberyard doesn't know how to get its inventory online, host its financials and pipeline reports, maybe use some niche marketing software to run special programs for certain sets of customers, and analyze all this data through one single interface in real time, because that scale has never been deliverable to the unFORTUNATE 5,000,000 businesses on the globe.

Salesforce's Apex would provide the interlocking piece that creates the true value for the user, because it is a market of markets, and a global one at that. It surely isn't the datacenter building; it's what is inside. But it can't go inside of nothing; that's impossible.

On the surface this speculation may seem far-fetched to some, and the more it seems like far-fetched, pie-in-the-sky rhetoric, the closer it probably is to being true. I speculate the market will gravitate in this direction because it allows for efficiency at the micro level: the lumberyard focuses on lumber, the IT company focuses on IT, car companies focus on cars, supply chain becomes a service industry focusing on supply chain, etc. Salesforce's Apex allows for the mashup of services that the enterprise may require to run the day-to-day elements of the business. The combination of the mainframe, control of the communications network from end to end, hosting the business applications that make the market, and using the search utilities to deliver to the customer something they never asked for but would be happy to pay for is what the Google OS is all about. That something you'd gladly pay for but never asked for isn't the product of some random direct mailing; it's a function of a boatload of history and data on what you and others with attributes similar to yours are interested in for work, pleasure, family, etc. Because while Google was pimping out all those services and apps, they, by virtue of capturing all that data, built a crystal ball, which is the holy grail of marketing.


So, Google should buy Salesforce because of, of all things, marketing. Google is the 4th Network, the medium of media. Let's not forget, media as an industry produces marketing. CRM is cheap today at $5B+ relative to what, assuming I'm close to the Marc (pun...get it?), CRM will be worth in a year or two or three. Google already put their bet on SaaS, so it's a no-brainer and a catalyst to a true global economy.
