It is what it is....

Thursday, February 11, 2010

Google Net - All things considered, a cheap means to an end

So they're going one step further than I thought and taking the fiber all the way to the home.

-Makes sense as they'll have zero dependence on any CLEC, RBOC, MSO or local transport layer. They'll own the traffic from the starting point to the end.

-This gives incredible scale and opens tons of opportunity to exploit mindshare to drive more revenue.

-Google building this network is kind of the inverse of Comcast's purchase of NBC. Instead of NBC paying Comcast fees to carry its traffic, NBC now has a free ride to the TV sets of Comcast subscribers. Similarly with GNet, Google gets a free ride to GNet users/subscribers/testers and bypasses the transit and transport fees it would otherwise pay to telcos, etc.

The big question is how this fits into their network neutrality stance. Will Google let other transit providers (aka ISPs) ride this last mile to the house?

Why should they when they are the ones spending the money to lay the fiber?

My gut tells me that the answer is yes, but there will be a cost. That cost could be monetary compensation from the ISP to Google, or it could be that Google gets visibility into the usage of the ISP's customers. That visibility, and the historical data resulting from it, is likely far more valuable to Google than the revenue they could generate by selling the last mile to alternative ISPs.

Envisioning this one step further: by connecting their network to all the private networks run by telcos, ISPs, MSOs, ILECs, etc., Google can become the arbiter/clearinghouse of peering (traffic exchange) between consumers and traditional network operators. Creating a true, on-demand, utility-based connectivity model. Like a cloud for network service. Like bringing BGP to the masses by enabling it at the core instead of the edge.

Can they have their cake and eat it too? Time will tell.


Wednesday, July 08, 2009

Web 3.0 Just Kicked Through the Door

In the playbook for Google's world dominance there is one play still in the developmental stage:

GoogleNet - the one missing link. No pun intended with the missing link part. Get it?

The posting below is a repost from February of 2007. I'm reposting because I think it is just as applicable today as it was then.

Couple quick thoughts before my paste:

- The amount of data on every person's habits that Google will be able to manipulate and exploit is scary. Chrome OS and Android just gave Google five times the access they had to me before. Why? Android was built to run on devices, and I can't stand Windows crashing every hour, so I'll gladly switch to a more stable Android. Not to mention it's free. Android will be the OS for my mobile phones, set top boxes, home entertainment systems, appliances (refrigerators, ovens), vehicle management and entertainment systems, and home automation devices (remember Google's forays into the home automation and electrical smart meter markets?), and probably, in the not too distant future, they'll manage our terlits too. That's a shit load of visibility they'll have into the patterns and habits of connected people. Again, no pun intended :)

- Will the Feds just sit back and watch as Google takes it deeper than Ma Bell ever dreamed possible?

They are giving away everything else, why not connectivity too? It's all about efficiency, right? Operational efficiency, risk mitigation efficiency and customer efficiency. Efficiency is the driving force behind GoogleNet (GNet). What is GNet? Google's foray into the ISP business. This 'business' for Google is a means to an end. At the end of the day Google sells advertising, and they'll use whatever means necessary to do that, be it building an OS and giving it away for free, or providing free connectivity so that free OS stays online 24x7. The more interaction we have with devices, the more impressions Google has to sell to advertisers.

The GNet hypothesis: bandwidth costs have fallen to a level where advertising revenue more than subsidizes the cost of the network. We are in the very early stages of a true 'global village,' as Marshall McLuhan called it. The cost for a traditional ISP like PacBell DSL...errr AT&T, Comcast, etc. to supply service to a residence is around $40 per month, and it trends down as they grow because they get cost scale... in the network world, the more you buy, the less it costs.

Google will be placing a bet, and a very calculated one, that advertising dollars will not only subsidize the operational costs of providing free services (ISP, VoIP, office apps, etc.), but will exceed them.
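The hypothesis above can be sketched as a toy break-even model: per-subscriber network cost falls with scale ("the more you buy, the less it costs") until ad revenue per subscriber covers it. Everything here is a made-up illustration; the $40/month baseline comes from the post, while the power-law discount and the $20/month ad revenue figure are invented assumptions, not real Google or ISP economics.

```python
def cost_per_subscriber(subscribers, base_cost=40.0, scale_exponent=0.15):
    """Monthly network cost per subscriber. base_cost is the ~$40/month
    figure from the post at an assumed 100k-subscriber baseline; the
    power-law discount is a stand-in for volume pricing."""
    return base_cost * (100_000 / subscribers) ** scale_exponent

def breakeven_subscribers(ad_revenue_per_sub):
    """Smallest subscriber count (in 100k steps) at which ad revenue
    covers the per-subscriber network cost."""
    subs = 100_000
    while cost_per_subscriber(subs) > ad_revenue_per_sub:
        subs += 100_000
    return subs

print(f"${cost_per_subscriber(100_000):.2f}/mo at 100k subscribers")
print(f"break-even at {breakeven_subscribers(20.0):,} subscribers "
      f"for $20/mo of ad revenue per subscriber")
```

The shape of the curve matters more than the numbers: under any falling cost curve, there is some scale beyond which "free connectivity" is self-funding.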

Owning the customer's network routes from end to end (being the ISP) provides Google with a private infrastructure platform for delivering customized content and advertising to each and every one of the people using their service. This delivery platform is always on and knows where you go, what you type, where you live, who your friends are, what files you have downloaded, what you look like, and whatever else they add on to their services. So when Johnson & Johnson or GE or Procter & Gamble or Coca-Cola or Pepsi is planning their media buys for the next year, do you think they'll purchase advertisements on radio, television, print, or the fourth network (Google)? Based on the ability to target a specific population with the attributes you desire, the choice is Google. Why? Because you know your marketing message is going to a qualified prospect, as opposed to the traditional 'shotgun' approach. Plus, you can get results in real time and tweak your message if it's not working. With the other three media you are somewhat ratholed into trusting some third party for ratings that may or may not even reach the people you want. By the time you figure this out, a slew of things can happen...some good, some bad, but why chance it when you don't need to?

At the end of the day, Google is building a traditional media killer, and the funny thing is...actually not really funny, but kinda...that the writing is on the wall and nobody seems to believe it. I do. @Home Network had this vision but couldn't pull it off because the cable cos couldn't get their heads out of their rears to see the opportunity sitting right in front of them.

Google already has deals in place, and likely in the works, with producers of original content which will allow their users access to that content. If I'm a content producer, where would I want my content to be seen? Example: let's say you are the producer of The Office, and Google offers you the ability to place your content on their network so that it can be viewed by anyone, anywhere, anytime, with a revenue share or some other creative structure such that you know your worst case scenario beforehand. Simple choice, right? Sure, ABC or CBS might offer an upfront fee in the form of dollars per show, but the audience is limited, the timeslot is finite, and in order for someone to view it they have to purchase cable TV or satellite or whatever. On the Google network your content would be globally accessible, and the broadband access, which replaces the cable or satellite TV service, is free to the masses. Additionally, you can develop complementary services that engage your viewership such that you are able to really develop a community around your content, as opposed to content around a community.

What does this have to do with being an ISP? Everything. Why do you think there is such a brouhaha over net neutrality? Without connectivity none of this is possible, and with the right connectivity, all of it is possible and defensible against the threat of new competition emerging to pose a challenge.

The internet hasn't changed anything when it comes to the bare bones media model. The driving force of media is, and always will be, advertising. Without it, there would be no television, radio, print or internet. It doesn't matter if it's old world or new world, it's still dependent upon advertising, and GNet is to Google what airwaves were to ABC, NBC and CBS: something to exploit in order to sell advertising.


Sunday, April 06, 2008

Data Centers are the Economy

The dependence on the data center today is far deeper and wider than it was in 1999. How could it not be? The Internet is no longer just another source for information threatening print, radio and television. It is THE source for information, THE source for education, THE source for communicating with individuals or to populations, THE source for commerce and trade, THE source for entertainment and THE source for a recession proof economy. At the heart of it all is the data center. The data center provides the platform which is enabling a more equitable distribution of wealth across a global stage.

Assuming this is all true, which I certainly believe it to be, there should be an epic flow of resources directed towards the buildout of the data center platform. Guess what...there isn't. The largest investments being made today are coming from Microsoft and Google. Give them credit. For all the grief the two of them take, they really "get it." And they've been rewarded financially for "getting it." But what about the innovation that results from fresh ideas and new ways of doing things? Google and Microsoft no longer possess that kind of freshness, yet they own a large percentage of the platforms which enable such innovation. How many startups challenging MSFT, GOOG and AMZN are going to be comfortable hosting their secrets on the very companies they are trying to dethrone? I doubt too many.

There is a huge misconception that a glut of data center space is on the horizon. That perception is ill conceived, and those who believe it and make investment decisions accordingly are ill informed and quite possibly passing up a once in a lifetime investment opportunity. Don't misunderstand my point of view: yes, there has been a fair amount of new data center builds announced, and in some cases started, but look at some of the recent ones. They are rarely done on spec, the risk is minimized, and the inventory rarely hits the retail supply.

Take a dive into the SFBA market and examine Digital Realty Trust's two most recent builds in Santa Clara. Both were pseudo-started on spec, but both were completely sold out before being finished. In both cases they were leased to single tenants, and DRT got two new customer orders out of them. Not two new customers, as both orders were signed by existing customers, but two new orders. Hardly a speculative build when you know your existing customers are about to hit a wall in terms of available capacity and you can lead them down a golden path to expand in facilities operated by a vendor they already do business with...oh, and which happen to be the only place on the West Coast where they can walk into such a situation.

In the same market, right around the corner from DRT's two sold out locations, there is Equinix, formerly the leader in speculative builds. Equinix is expanding its existing Santa Clara facility by around 40k sq ft and, if it chose to, could have the entire thing sold out today. Since their target customer isn't a 40k ft requirement, and since they're in an enviable financial position, they can be choosy about who they sell the space to and find customers who will pay top dollar for their product. That definitely won't be a startup, as startups are too cost sensitive and aren't worthy of being extended large amounts of credit in the form of inventory...Equinix has been there, done that, and learned from the past.

What about the rest of us? What about the major consolidation at both the state and federal government level, which is far greater than anyone anticipates? What about the major efforts going on in enterprises across the US, all of which require data center facilities far more robust than what is currently available to them?

If data center capacity is not available for these and many more types of requirements it is a serious threat to the growth of the economy and only positions the big guys more favorably than anyone should be comfortable with.


Wednesday, October 24, 2007

Advanced Data Centers, Inc. 50 Megawatt Sacramento Project

For those of you who have a blog or maintain a website, you're probably familiar with web analytics. I was first introduced to web analytics in my days at Equinix, where I managed the Omniture relationship. When I first began working with Omniture they had a couple racks of servers. By the time I left Equinix, Omniture was close to 1000 racks of equipment spread across the globe. This happened over the course of a couple years. It didn't take a dummy to figure out they were onto something. A couple months after I left Equinix I started blogging and was introduced to web analytics from the point of view of a customer/user. I believe analytics is one of the most powerful elements of online behavior. Since the internet is subsidized by advertising dollars, analytics plays a huge role. But this post isn't about analytics; it's about, as the title suggests, a new 50MW data center project underway in Sacramento, CA. 50 megawatts is the most power at any data center site I have ever come across. This project has been, and will continue to be, the recipient of a good portion of my attention for quite some time....or until we are out of space and power.

After spending 7 years at Equinix I thought I was done with data centers. So much so that it made my head spin even thinking about them. Shortly after leaving, it dawned on me that the data center market is unique in that there are natural barriers to entry in the form of extremely high cost of admission. Some people claim that data centers are being commoditized and with all the new construction taking place this line of thought is getting more attention. For those of us building data centers this is exactly what we want....that is of course as long as it's not the attitude of an investor or banker.

It's positive for us in that, by virtue of its negative connotation, it naturally scares new entrants away, creating additional strains on supply. It also makes it very difficult, if not impossible, for a new entrant to raise the necessary capital to build out a large scale data center, given construction costs of approximately $1500 per square foot. That cost alone is enough to scare most people away. There are quite a few instances of major corporations in the US forgoing the outsourcing model of colocation and becoming site operators themselves. Without huge augmentation, very few companies have the scale and resources to make this work over an extended period of time. Not impossible by any stretch, but certainly challenging. Probably the most challenging element in getting total executive buy-in/support comes when the CIO meets with the CFO and tells him he needs $250MM to build a new data center that will support their IT requirements for the next ten years. More often than not, the CFO recalls a similar discussion 18 months ago in which this same CIO was asking for budget to outsource the same type of requirements, albeit less capacity, to a colo supplier. Once the CFO realizes this isn't a prank and that the guy is serious, he collects himself and pours a stiff drink (it's a late night at the office in this example, so it's all good).
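The CIO's ask above is worth running through a calculator. Using the two figures from the post, roughly $1500 per square foot to build and a $250MM budget, the shell math looks like this (land, power upgrades and operating costs deliberately left out):

```python
# Rough build-out arithmetic using the figures quoted in the post.
BUILD_COST_PER_SQFT = 1500   # approximate all-in construction cost
BUDGET = 250_000_000         # the CIO's ten-year ask

sqft = BUDGET / BUILD_COST_PER_SQFT
print(f"${BUDGET:,} buys roughly {sqft:,.0f} sq ft of data center shell")
# About 166,667 sq ft, before any recurring cost shows up on the books.
```

Which is exactly why the CFO reaches for the bottle: a quarter-billion dollars buys less floor space than a mid-size office building.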

When we embarked on our new venture, we had these types of discussions in mind and saw an opportunity to provide a service which would give the CIO the resources to support their IT customers and, at the same time, give the CFO a heart attack at every budget meeting.

I'm getting sidetracked, so I'm going to get back to the analytics component. I find it very interesting to take a close look at how people get to this blog. Some get here by entering terms on the various search engines, some by clicking on links on other blogs or news sites that reference this one. And for some, I have no idea how they wound up here. It has been enlightening to see what people search for when they go to Google, Yahoo, or whatever search engine they use. I've found that a good portion of the folks who wind up here search for common terms that we throw around every day in the data center world...things like: watts per sq foot, kva to kw conversion, 60amp 208v circuit, Equinix rack price, Savvis data centers, 365main outage, etc. You get the point. The analytics service I use allows me to see the address and domain of these readers' networks, which has given me some insight into what IT people within these organizations have on their plate...or what keeps them up at night. Then again, I could be so wrong and off base I may as well be a fire on a frozen pond. Either way, people who wind up on my blog appear to be in the process of determining how to address their data center requirements.
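The kind of aggregation an analytics service does here can be sketched in a few lines. Each record below is a made-up (referrer, search terms) pair standing in for a real access-log entry; the search phrases echo the ones mentioned above, but the traffic mix is invented.

```python
from collections import Counter

# Hypothetical hit records: (referrer domain, search terms or None).
HITS = [
    ("google.com", "watts per sq foot"),
    ("google.com", "kva to kw conversion"),
    ("yahoo.com",  "equinix rack price"),
    ("direct",     None),                  # typed the URL straight in
    ("google.com", "60amp 208v circuit"),
]

# Count where readers come from and what they searched for.
referrers = Counter(ref for ref, _ in HITS)
searches = Counter(terms for _, terms in HITS if terms)

print(referrers.most_common(1))   # [('google.com', 3)]
print(len(searches), "distinct search phrases")
```

Scale that over months of traffic and you get exactly the window into readers' worries described above.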

Without overstepping the boundary of turning this blog into a sales pitch, I really believe the company behind the aforementioned data center project, Advanced Data Centers (ADC), may be a valuable partner to those folks in need of capacity for their IT gear. As such, what better way to reach them than to discuss it here.

If you're one of those folks or think you will be soon, I invite you to visit the ADC website to see what we're up to and if it looks intriguing drop me a note either on this blog or via the form on the contact page of the website.


Friday, October 05, 2007

Level3's Smoke and Mirrors

Level3 didn't slash pricing for CDN service with their recent 'same price as transit' marketing ploy. If anyone slashed pricing, Amazon did with the introduction of their S3 storage. All Level3 did was imply their baseline CDN service is no better than their transit...heck, at least they admit it. A couple points to note: baseline CDN service really is transit, or no better than transit from Internap or some other route aggregator, and Level3 has lost a lot of money over the years, so it should come as no surprise that they are at it again. This time there is some method to their madness: they're baiting customers and will either upsell them on value added services or give the baseline product away for free and demand a share of the revenue the customer generates by distributing content, in essence subsidizing the delivery costs of that content.

Questions that come to mind about Level3's shuffling of the product pricing boxes: How much is their storage going to cost? What about those companies who need some DRM functionality for their downloads? How much will that cost? What about custom players or layered advertising? How much will they charge for that, and what percentage of the advertising revenue will they require you to give them for using their CDN?

I read that their new delivery pricing works out to about $11 per Mbps equivalent. Not a bad rate, but certainly not as low as some of the pricing from other players in the space. And definitely not free, yet. It will be free sooner rather than later, because it will be subsidized by advertising dollars.
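For readers used to per-GB CDN pricing, here is a back-of-the-envelope conversion between the two common units: dollars per Mbps (flat-rate or 95th-percentile style billing) and dollars per GB delivered. The sketch assumes a fully utilized megabit over a 30-day month, which is the best case for the per-Mbps buyer; real utilization is lower, so the effective per-GB price is higher.

```python
def mbps_month_to_gb(mbps=1, days=30):
    """Decimal GB delivered by `mbps` of sustained throughput over `days`."""
    bits = mbps * 1_000_000 * days * 86_400   # bits per second * seconds
    return bits / 8 / 1_000_000_000           # bits -> bytes -> GB

def per_mbps_to_per_gb(price_per_mbps, days=30):
    """Equivalent per-GB price for a flat per-Mbps monthly rate."""
    return price_per_mbps / mbps_month_to_gb(1, days)

print(mbps_month_to_gb(1))                  # 324.0 GB per fully used Mbps-month
print(round(per_mbps_to_per_gb(11.0), 4))   # ~0.034 dollars per GB
```

So $11/Mbps is roughly 3.4 cents per GB at perfect utilization, a useful yardstick when comparing against per-GB offers like S3's.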

First, it may be helpful to explain the difference between transit of data and distribution of data. Everyone on the internet has transit: the ability to send packets from one place to another. There is some intelligence inherent in transit, because it is built into the core backbone routers of the major carriers and ISPs.

Example: you're sitting at your computer on Sunday morning checking your fantasy football scoring on the Sportsline website. You live in CA and have Comcast as your ISP. Sportsline is hosted in FL at the Terremark data center (not sure if this is actually the case; I'm using it as an example only) and connected to the Internet at Terremark via Sprint. When you type in the URL and hit return, a request gets sent from your computer, out your cable modem, and onto Comcast's network. Comcast's routers see the IP address where that packet is supposed to go and look up in their routing tables the quickest way to get that data off of Comcast's network and onto the Sprint network. Since quickest doesn't always translate into best, and in this case only translates into Comcast not wanting to carry the cost of that packet on its network, your experience is subject to all sorts of hiccups along the way. That first packet eventually gets to FL and Sportsline, and when it does, the same process happens again in reverse. The updated score makes its way back to you in CA and all is well. In this instance the experience wasn't all that bad, but the information you requested was quite small in the volume of data required to get you that score.

Imagine if, instead of the updated score, you were requesting the infamous Paris Hilton and Rick Salomon video. The file size of that video is exponentially larger than that of the fantasy score, and the cost to Comcast and Sprint to deliver it is also exponentially higher. Fortunately for you, Rick Salomon was out to make some dough and wanted the user experience to be similar to his own experience, so he hired a CDN to help ensure the quality of the experience remained tip top :) Important detail to note: CDNs all buy transit, and most buy it from multiple upstream providers. In this made up example, Rick was using ACME CDN, a new CDN.

Unlike Akamai, ACME was built from the ground up to deliver large video files and wasn't too concerned with the last mile. As such, they had a smaller number of nodes placed in carrier neutral data centers on the two coasts and a couple in the South and Midwest. Since ACME isn't a web hosting company, Rick needed to contract with a second vendor to host the rest of his homepage. Rick is a sharp cat and a penny pincher, so he used Amazon's S3 file storage service, which would itself have been sufficient had the video been of Rick and anyone but Paris. He understood S3's limits, and since he was out to add to the stack of paper he was blowing on Paris, he only relied on Amazon for the non revenue generating files. When you click on the 'watch' button on his site, the URL it sends you to is not on Amazon's infrastructure but on ACME's. Similar to the fantasy scoring example, Comcast's routers determine that the destination address is part of address space supplied to ACME and easily accessible via SBC, who happens to be one of Comcast's upstream providers. Comcast dumps the packet onto SBC.

Unlike Akamai, ACME's technology doesn't rely upon multiple DNS queries to determine where the originating or destination IP address is physically located. ACME uses anycast, which allows it to advertise the same ASN at all of its nodes. Another difference is that ACME isn't using a caching setup; instead, it acts as the origin for its customers' files. This allows ACME to guarantee that 100% of its customers' files will always be on 100% of its nodes. With a cache this can't be the case, as only the most requested files remain on a node for a period of time before being purged. That's great if you're popular, but not so great if you're not popular, or if you have very private data but depend on high performance.

When SBC gets the packet, it sees the AS it's destined for and dumps it as soon as possible. In this case that happened to take place in the same building: Equinix in San Jose. Both ACME and SBC are colocated at Equinix, so there was minimal cost to SBC to deliver the packet, which makes them happy to deal with ACME. Can't say the same for their dealings with Akamai, because Akamai's nodes are out at the edges of SBC's network, in the COs, which means SBC has to keep those packets on its network far longer than is ideal. Amazon's S3 cloud is located in VA, and had Rick not used ACME as the origin location for the video, a process similar to delivering the fantasy football score would have occurred with the delivery of the video. Fortunately for you, Rick's decision to use ACME means you view the video almost instantly instead of waiting around for packets to bounce between the West coast and the East coast.
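The routing decision the example keeps coming back to, "Comcast's routers see the destination and pick the way off their network", is longest-prefix matching. Here is a minimal sketch of that rule; the prefixes and next-hop labels are invented for illustration (198.51.100.0/24 stands in for ACME's address space), not anyone's real routing table.

```python
import ipaddress

# Hypothetical routing table: (prefix, next hop). The /0 default route
# is least specific; the /25 learned at the Equinix peering point is
# most specific.
ROUTES = [
    ("0.0.0.0/0",         "transit: Sprint"),
    ("198.51.100.0/24",   "peer: SBC"),
    ("198.51.100.128/25", "peer: SBC @ Equinix SJ"),
]

def next_hop(dst_ip):
    """Longest-prefix match: of all prefixes containing the destination,
    pick the most specific one, the same rule a backbone router applies."""
    dst = ipaddress.ip_address(dst_ip)
    best = None
    for prefix, hop in ROUTES:
        net = ipaddress.ip_network(prefix)
        if dst in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, hop)
    return best[1]

print(next_hop("198.51.100.200"))  # matches the /25 -> "peer: SBC @ Equinix SJ"
print(next_hop("203.0.113.9"))     # only the default -> "transit: Sprint"
```

Anycast slots neatly into this: because ACME announces the same prefixes from every node, the "most specific match" naturally lands at whichever node is closest in routing terms.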

So what does this have to do with Level3's pricing, or the pricing of the CDN market in general? It highlights how much of a difference adding an origin storage component to a CDN can make in terms of route miles. If Level3 can charge customers more money for using fewer route miles, all the better. It may even make sense to give transit away if they can make up that $11/Mbps by making customers believe they're getting cutting edge pricing and product positioning, when in reality Level3 is stroking itself by driving more efficiency on its network.

Ohhhhh, if that were only the case. I guess theory sounds good on paper but reality is what it is.


Monday, August 27, 2007

Net Neutrality: The Ugly Bride

Over the next 20 years net neutrality will be left at the altar many, many times. AT&T already did it once, back when they had no way to provide cable modem service because @Home Network had been the cable cos' partner of choice, meaning AT&T didn't have a broadband connection into the home. What did they do? They found the other kid who wasn't invited to the party, AOL, and cried all the way up to the principal's office, where they screamed and yelled about how poorly they were treated and how they should be included. Funny how, shortly after the TCI acquisition was approved, they slowly crawled over into the anti-neutrality, anti-government-intervention, pro-free-market camp.

What about Google? Google talks a good game, but like the barons of the past, they're so rich and powerful that it's easy to preach the socialistic smoke screen of open access when the commotion you're making distracts attention from the real reason you need open access policies: advertising dollars.

Before I get into why, this thesis hinges on where I see Google taking its road map over the next ten years. Here is what Google will become and accomplish during those 120 months:

- Already is today, always will be, and DON'T EVER FORGET IT: generating revenue by getting a piece of the advertising and marketing budgets of BUSINESSES worldwide. (Sound familiar? It should, because you know many similar companies. Companies like NBC, CBS, ABC, FOX, etc.)

- Building out an IP network that will kick the ass of anything we have today, because they'll squeeze every little drop of 'utility' out of each tiny unit of that network, driving the revenue per packet higher the longer that packet stays on their network. The big deal in this? They will give access, and access devices, away for free to anyone who wants one. Just like the bikes at the Googleplex: take one when you need one and leave it for the next guy when you're done. Since they're free, there really is no reason to steal them because there is no reason to buy one, right? Riiiight.

- Will emerge as the leading global wireless phone carrier, giving access to voice and data via free gPhones to the masses.

- Will enable a true global village as Marshall McLuhan predicted, providing ubiquity in access reach, speed and scope. No kids left behind, just like the California school system. Uh huh.

How could such a beacon of industry, a company studied and admired as the pinnacle of American enterprise, show its true colors and be so fickle, flip flopping from one side to the other? I love answering my own questions, and since nobody else is raising their hand, I will tell you. It's real simple, and it sheds some light on why net neutrality is a pipe dream. Net neutrality itself is a concept created by the telcos, in cooperation with their opposition, that pits the two sides in a one-extreme-against-the-other showdown. Nobody will ever win this showdown, because if someone did, it would be over, and if it's over, how can they conveniently flip or flop to the other side when need be? Everyone wants to have their cake and eat it too, and when you're talking about something you can see, touch and feel, the history of any given party is easily forgotten, effectively masking the economic warning signs. The most obvious one being the saying: if it's too good to be true, it is, and never will be true.

It's all about one dollar and maximizing how many times that same dollar can be 'turned over' during its lifetime. For this example, my definition of 'turned over' is the value of the taxes the US government (or any tax collector) collects each time that dollar is spent, collected, spent, collected, etc...
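The turnover idea above can be made concrete with a toy model: each time the dollar is spent, the tax collector takes a slice, and the remainder circulates again. The 10% rate and the turnover counts are arbitrary illustrations, not real tax figures.

```python
def total_tax_collected(tax_rate, turnovers):
    """Cumulative tax extracted from $1.00 after `turnovers`
    spend/collect cycles, with the remainder respent each cycle."""
    remaining, collected = 1.0, 0.0
    for _ in range(turnovers):
        tax = remaining * tax_rate
        collected += tax
        remaining -= tax
    return collected

# At a 10% rate, ten turnovers already extract roughly 65 cents of the
# original dollar; the take keeps climbing toward the full dollar.
print(round(total_tax_collected(0.10, 10), 2))
```

The same geometric logic is why keeping the packet, or the dollar, on your own network for more cycles is worth so much: each extra turnover is another slice.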

Keeping that packet on your network all the time ensures you have more opportunity to market to the user, gives you more and deeper analytics on that person's habits, and gives you a deeper understanding of what is important to them and what they are hiding. After all, the easiest way to get someone to do something is by guilting them into it.

Yes, businesses will do what they can to maintain an advantage in the market for the assets they've built or acquired. This is nothing new; it is happening in every industry, not just telecom, and it is expected. If you elected an official who is uneducated and susceptible to rhetoric and hype, you have the ability to change that instead of throwing more rhetoric and hype on top of it. The fact of the matter is that, with or without net neutrality, market innovations will happen that shake up the landscape. We're seeing it now with wifi. In a few years, the net neutrality debate as it relates to in-ground last mile connections to the home will likely be a laughable memory, thanks to innovations by companies addressing a market deficiency such as the lack of competition for the last mile connection. Wifi, WiMAX, satellite or any other protocol/standard/whatever bypasses the last mile altogether, thus bypassing the ILEC, MSO, etc.

So, if you really want a competitive market for alternatives to any of the technological services we have today, the best way to ensure that is to allow those companies to keep their systems closed and have tiered offerings because the capitalistic market forces will produce alternative, innovative, ground breaking and industry shattering service offerings. Does Napster ring a bell?

Another thing to consider: the telcos (distributors) rely on the content producers (websites, apps, ISVs, etc.) as much as the producers rely on the telcos. They are complementary in nature, and it is in their best interest to play together, because the combination of the two is what makes each, in its own right, attractive. What would Yahoo or Google be without the internet? Why did GM pull that all-electric car they used to have off the market, and why is Tesla now coming out with an even better one? Because the oil companies needed GM as much as GM needed the oil companies.

We aren't fans, we are participants. Just as fans are often confused when their favorite team signs a questionable player, or a player says something negative about their team, these industries are made up of for-profit businesses which must show a return on investment or they will vanish faster than they got here. Don't be fooled by the smoke screens!!!


Saturday, July 14, 2007

Logic and Advertising on Facebook

Is it me, or does there seem to be a growing number of naysayers bagging on Facebook as an advertising platform? It could be me, but I don't think so, and this is why: of the 26M unique visitors in the month of May 2007, 13M were older than 24 years of age. Of that 13M, 10M were above age 35!

If advertising on Facebook isn't showing a return, then advertising on any web property should be questioned as well. This is because, at the end of the day, Facebook isn't just about college students; it's about a college educated audience that is smarter than the advertisers. To me, that poor click through percentage sounds like a statement from the users that the advertisers are doing a poor job on the creative front and the in-your-face front. Facebook users are educated people, and generally speaking, educated people question things, especially when they are groomed with the notion that advertisers are like used car salesmen: slimy, untrustworthy, fickle, after the quick buck, and once they're done with you they're on to the next victim. Whether that is true or not doesn't matter; it's a perception, and in this case perception may be reality.

Does this mean Facebook isn't worthy of the attention or the prospective valuations floating around? Hell no. If that were the case, then NBC, ABC, CBS, Clear Channel, Fox, Viacom and all the other advertising-dependent 'networks' would be less than worthless, because they have exponentially greater expenses than Facebook. What it means is that Facebook is the catalyst for turning advertising as we've known it, best described as the effort to create fear, uncertainty and doubt wrapped around a call to action, into a relic of the past. In doing so they are resetting expectations, and possibly forcing a reevaluation of prior efforts, by advertisers and, more specifically, their agencies. It's about time that ad click-throughs and pageviews be tossed aside as the main metric for placing a value on marketing to a set of users. I can tell you with a straight face that in my 15 years of being on the internet, I have clicked on fewer than ten banner ads yet bought tens of thousands, if not a hundred thousand, dollars worth of goods and services online. I can't tell you the last time I looked at a banner ad and thought it was intriguing. Because they aren't. They are a one-way street and often an intrusive obstacle in my daily routine when they impede the performance of a website or do that overwrite or splash-page crap they sometimes do on CNET. If that isn't a reason to not click on an ad, I don't know what is.

It's about time the advertisers start giving something of value to the internet ecosystem as opposed to throwing shit at a wall and expecting revenue to flow their way. Some people may argue that they do give back via the $15B+ a year they spend on online advertising, but that is revenue to the companies in the business of selling ads. It is not adding recognizable or measurable value to the marketees, those of us they try to get to click on their banners, the everyday users, all of us.

What is different about the internet compared to the three other media networks, TV, print and radio? It's measurable. Really measurable. The internet is far more precise and specific than a broadcast network: measuring a broadcast's audience is at best a guess and at worst a hope. On the internet it is possible to track how many users came to your website, where they were from, how long they stayed, what OS they were running, what browser they were using, where they came from and where they went when they left, and get that info as it is happening. Those features are what make the internet an 11 on a scale of 1 to 10 as a way to gauge the effectiveness of a particular effort, in this case marketing. On the broadcast mediums you can afford to be lazy because there is no way to track how effective a particular effort is other than sales numbers, and an increase or decrease in sales isn't necessarily attributable to that marketing effort. There are plenty of companies that did and/or do ZERO advertising yet generate hundreds of millions of dollars a year in revenue. It's not that they don't market themselves, they definitely do, but they don't advertise. Big difference. Advertising is a form of marketing, and with it come negative perceptions. Perceptions that, for many, are ingrained so deep that they will never be unseated, but they can be lessened if, and only if, there is a sense of trust between the marketee and the marketer.

How is that ever going to happen? The simplest form is recommendations, or word of mouth. When someone you trust recommends something, you are more likely to believe them than some Madison Avenue marketing dude who created the 'Coke is it' campaign. More like Coke is shit because it is bad for you, and how many times has a friend come up to you and recommended Coke? For anyone who grew up in the 70s, 80s or 90s, I'm referring to the brand of soft drink :) Conversely, how did you hear about Google, YouTube, Facebook or Thomas Keller's restaurants like The French Laundry, Bouchon or Per Se? All leaders in their respective fields. I can tell you one thing for sure: it wasn't from advertising, and that is a fact.

Does this mean we're in an economic bubble and the sky is falling? Absolutely not. Does it mean companies like advertising agencies on Madison Ave and ad networks placing banner ads on sites need to adapt to the internet? You betcha. Huh, how can an internet advertising network not be adapted to the internet by virtue of its existence? Very simple: they are applying 19th-century methodologies and beliefs to a 21st-century audience and platform. Do you take your horse and buggy to the gas station to refuel? No. Then why assume your 200+ year old demand-creation theory is applicable now?

What I am getting at is that, generally speaking, the internet is not so much an advertising platform as it is a branding platform. That is very powerful for both sides of the market, because it means poor advertising can harm a company's place in the market as easily as subtle branding can strengthen it. Over the long haul, subtly reinforcing a brand without creating fear, uncertainty or doubt will strengthen that brand and build trust with customers, which will lead them to recommend the associated products and services to their friends and acquaintances, which drives revenue growth. The same thing advertising was supposed to do but apparently isn't. In conclusion, it's not that Facebook or its users are a non-marketable audience; it's that they're being targeted with 200-year-old theories that were never tested or proven effective in the first place.


Thursday, July 12, 2007

The Datacenter Cheat Sheet: One size doesn't fit all

Choosing a datacenter is no trivial task. For the majority of sophisticated companies, the days of the one-stop shop are history. That wasn't the case seven years ago, when you couldn't get fired for choosing Exodus as your provider of datacenter services. It wasn't uncommon to add redundancy to your colo'd environment by installing gear in multiple Exodus facilities, maybe one on the West coast and one on the East, and you were a hero. Today Exodus is no more, albeit some of its remnants remain with Savvis and Digital Realty Trust, and a fresh crew of cautious datacenter providers have stepped up to the plate, each focused on fulfilling some market need. What could possibly have changed in the market to have such dramatic implications in such a short period of time? Networks (backbones, last-mile options, overlays). Hardware got smaller but more dense. Software as a service is real. Ubiquity in access speeds and availability. Efficiency enablement... if there is such a phrase. Increased demand for electricity. Access to capital. Not every item on that list needs to be crossed off everyone's checklist, but for most companies embarking on the discovery process of finding the right colo vendor, those are the bricks of the road we traveled.

The past two years have brought a change in the way datacenter vendors charge for the services they provide to customers. There was a time when you could compare the costs of doing business with one vendor versus another by their cost per rack or cabinet. If you're doing that today, you could be in for a big surprise. In fact, if the first question your prospective vendor asks you is, 'how much space do you need?', you should consider making a beeline for the closest exit, because the vendor should be telling you how much space you need based on your power requirement.

For most providers, a rack takes up 25 square feet of datacenter floor space. The issue is that floor space isn't all created equal. The 'players' today have varying power densities, ranging from 60 watts a foot to 200 watts a foot. Thus, a rack in the 60-watts-a-foot facility provides less than 1/3 of the utility (in an economic sense) of a rack in the 200-watts-a-foot facility. I'm getting sidetracked, but felt those were important points to make considering the newness of the industry in general and the lack of a standard unit of measure among vendors. I will dive deeper into the power issue, and how to really compare one vendor vs. another, in my next post on power. And it may shock you. Sorry, couldn't resist :)
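A quick sanity check on that density comparison, as a sketch using the post's own numbers (25 sq ft per rack, 60 vs. 200 watts a foot):

```python
# How much power a 25 sq ft rack actually gets at each end of the
# 60-200 W/ft density range quoted above.
RACK_FOOTPRINT_SQFT = 25

def rack_power_kw(watts_per_sqft, footprint=RACK_FOOTPRINT_SQFT):
    """Power available to one rack, given the facility's design density."""
    return watts_per_sqft * footprint / 1000.0

low = rack_power_kw(60)    # 1.5 kW per rack
high = rack_power_kw(200)  # 5.0 kW per rack
print(low / high)          # the low-density rack delivers 3/10 the power
```

Same floor space, same rack, less than a third of the usable power: that's why comparing per-rack pricing without knowing density is meaningless.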

For simplicity, I will break down the buy side market and evaluate each section the way the supplier, not the buyer, would. We'll go generic and call the three sections:

Small (up to 100kw)
- both in footprint (space) and power consumption (<3kw per rack)
- Customers in this category may have a requirement for a rack or two, perhaps up to ten. Power required for each rack is a single 20amp circuit, perhaps with a redundant circuit as backup. At 25 sq ft a rack and ten racks, this customer has a requirement of 250 sq ft and approx 30kw of power. This customer likely isn't as concerned with the cost of power as with the proximity of the datacenter to their office. The exception to that last statement would be carriers and ISPs who colo in carrier-neutral sites, as they tend to have relatively low power requirements and do most of their management via remote login.
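The sizing math for this bucket, sketched out (assuming the post's 25 sq ft per rack and roughly 3 kW per rack):

```python
# Back-of-the-envelope sizing for the "small" customer described above.
racks = 10
sqft_per_rack = 25
kw_per_rack = 3.0   # single 20 A circuit, ballpark upper bound

space_needed = racks * sqft_per_rack   # 250 sq ft
power_needed = racks * kw_per_rack     # ~30 kW
print(space_needed, power_needed)
```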

Medium (between 100kw and 500kw)
- In this category the vendor is most likely telling the customer how much space they'll need. Reason being, not all sites are equal. For example, a customer could go to Equinix, whose pre-2007 sites are built to around 120 watts a foot, and be told they need to purchase 125 cabinets worth of floor space in order to support their power requirement, when the customer may only need 50 cabinets with 10kw in each one. That is only two 208v 30amp power circuits per cabinet, or 100 power circuits total, with a capacity of 5kw each (actually 4.99kw at 80% of their gross capacity). This is a standard, run-of-the-mill setup for companies like eBay, Amazon, Google (albeit with different circuit types but roughly the same total kw per rack), Salesforce, YouTube, etc.
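The circuit arithmetic behind that "actually 4.99kw" aside, as a sketch (the 80% factor is the standard continuous-load derating on a branch circuit):

```python
def usable_kw(volts, amps, derate=0.8):
    # Branch circuits are conventionally loaded to 80% of their gross
    # rating for continuous draw, hence the derate factor.
    return volts * amps * derate / 1000.0

per_circuit = usable_kw(208, 30)   # 4.992 kW per 208 V / 30 A circuit
per_cab = 2 * per_circuit          # ~10 kW per cabinet, two circuits each
total_kw = 50 * per_cab            # ~500 kW across the 50-cabinet deployment
```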

Large customers (greater than 500kw)
- In addition to the customers mentioned above, this category includes most of the top 50 internet companies as well as many enterprises that may never colo their gear. The majority of users in this category are not colo'd in third-party sites; they build and operate their own datacenters. While there are a handful that overlap (Yahoo, Facebook, Google, eBay, Internap, etc.), those tend to be the internet-related companies, not the traditional enterprise. I believe this is due to the nature of their growth and the fact that they had to start somewhere, which likely was in a colo datacenter such as Exodus, Equinix, AT&T, etc. Over time, these companies built up their internal expertise in operating the physical components of a datacenter. That gave them the skill set to run their own sites, and their growing demand for, and expense of, outsourced datacenter space made investigating the possibility of building and operating their own sites a no-brainer. Since running a datacenter is nothing new and not limited to internet companies, there is an established ecosystem of companies providing services and products to datacenter operators. As such, there are companies whose business it is to manage the day-to-day operations of any datacenter, but the one with the asset on the books, or the livelihood on the line, still keeps in-house expertise in all the areas of design, operations, maintenance, etc. It's far too large an investment, and all too important to the day-to-day operations of the business in general, not to have an in-house staff of engineers and operations experts.

Along with the growth in demand for computing space came growth in the revenue generated by the services delivered from the assets in these datacenters. More revenue = more access to capital + greater borrowing power + more flexibility in provisioning + less emphasis on planning + operational transparency to internal customers + increased cost controls + increasing costs of outsourcing = the decision to build, buy, or lease and operate datacenter assets.

Before we get into who fits where, it may be useful to first go over why all vendors aren't going after the large customer, and why, if they do, they limit the number of that type of customer to one per datacenter. Vendors want to fill their sites up, and just like the buyers, they want to get the most bang for their buck. The difference is that the vendor's buck is already spent, so they must get the most return on that buck. As with most markets, pricing is somewhat volume driven, so the more you buy the cheaper it is. There really is no logic to that other than that at some point the large customer, if he were paying small-customer pricing, would be spending enough to put that spend towards building his own site and not outsourcing at all. For right or wrong, that dynamic just is what it is.

At any rate, let's walk through an example using a fictitious vendor named XXX, Inc., who just opened a 100k sq ft datacenter with 15MW of sellable power. XXX could fit 5000 cabinets in this space and deliver 150 watts a foot. XXX has a good sales team who uncovers two opportunities with prospective customers who have a requirement of 5MW each. In order to get these deals, XXX will need to get real competitive on the pricing structure, well below their retail rates. Colo vendors are often lured by landing the big-name customer and the oomph such a win adds to selling out a site quickly, which pleases investors, who are more likely to make additional investments if they can see success on the initial ones. Today, most of the big guys in the market have learned the pitfalls of giving all or most of a building's resources to a single user. Another wrinkle with these big customers is that they don't want to buy a bunch of additional services along with such an arrangement.

The issue is that you can't mix a large company's requirements with a colo vendor's abilities and get a happy outcome, because success for one may well mean failure for the other. It isn't so much that the low price hurts the vendor as it is that the lack of operational flexibility hurts the customer's experience, which in turn increases the attractiveness of insourcing. A perfect storm was brewing for a flood of investment in running your own datacenter: the ability to grow, and grow on your own terms, lower TCO, less dependency on outside personnel, and a bunch more factors, features and functions compared to colo'ing in a third-party site.

This dynamic isn't lost on the vendors we know and love today, as the opportunity cost of servicing a single 5000 sq ft customer compared to selling that 5k sq ft to 200 different customers is far too great not to notice. Assumptions for the example: cab = 20 sq ft, pricing excludes power, the large deal gets a 30% discount off retail, a cross connect costs $200 per month, the big customer buys ten cross connects and each small customer buys three.

Example of revenue generated by a single customer in 5k sq ft:

250 cabs X $560 discounted rate = $140K, plus ten cross connects at $200 = $142K total
$142K divided by 5000 sq ft = $28.40 in revenue per sq ft

Example of selling the 250 cabs to 200 individual customers:

250 cabs X $800 per cab = $200K, plus 600 cross connects at $200 = $320K total
$320K divided by 5000 sq ft = $64 in revenue per sq ft

Ahh, the reason it really isn't that good of a deal for a retail vendor to take too many anchor deals: the opportunity cost is huge!
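Here's that comparison as a sketch under the stated assumptions (20 sq ft cabs, 30% anchor discount, $200/month cross connects, ten for the anchor, three each for the 200 retail customers); note the exact totals depend on how you count cross-connect revenue:

```python
SQFT = 5000
CAB_SQFT = 20
cabs = SQFT // CAB_SQFT                 # 250 cabinets in the 5k sq ft

# One anchor tenant: discounted cabs plus ten cross connects.
anchor = cabs * 800 * 0.70 + 10 * 200
# 200 retail customers: full-rate cabs plus three cross connects each.
retail = cabs * 800 + 200 * 3 * 200

print(anchor / SQFT, retail / SQFT)     # monthly revenue per sq ft
```

The retail mix yields well over twice the revenue per square foot, which is the whole opportunity-cost argument in one line.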


Tuesday, July 03, 2007

The Data Center Cheat Sheet - What exactly are we dealing with?

It may be useful to go through a brief overview of the Internet datacenter market's history to properly appreciate today's market dynamics, so bear with me if this is old news or a regurgitation of a not-so-happy time. Those times build character though :)

Over the past few years the datacenter market has experienced a shift in power in the relationship between datacenter vendors and their customers and prospective customers. This is a function of an imbalance in supply and demand. From 2000 to early 2005, it was a buyers' market for colo, and buyers played vendors off each other to get the very best deals they could. They were quite successful in getting the often desperate vendors to strike deals that were well below financially healthy or sound. From the vendors' perspective, they were just happy to get customers into their datacenters. After all, they had rent to pay to their landlords, and sitting inventory that is not generating any money is worse than selling that inventory for anything greater than zero. A lot of poor pricing decisions were made during this window of time, but pricing wasn't the only questionable attribute of the deals that went down during this period.

The bigger thorn in the side of most of these deals related to what customers were allowed to install in each of the racks or cages in the colos. Remember the time, and put yourself in a vendor's shoes for a minute. You're negotiating with eBay or some other large retailer, and just the thought of signing this customer makes you forget the notion of profitability. At this point, stopping some of the bleeding would be a step in the right direction, and as such you agree to give eBay the best rack pricing you've ever given anyone and don't put any parameters around how much power they can install and consume. Secretly, you're really hoping they over-provision power, because that is money in your pocket that helps offset the low rate on space you've agreed to. Sidebar definition: over-provisioning power is the scenario whereby a customer provisions 60 amps of primary 208v power (as an example) and only consumes 20 amps of it. The customer pays the vendor for the full 60 amps, but the vendor is only on the hook to the utility for what is actually used, in this case 20 amps. That is 40 amps of profit, right? Yes, for that particular month it was. This was quite a common situation, and more often than not it was because customers were ordering their colo configurations based on what their equipment required at full load. The issue was that nobody was using the equipment anywhere near capacity.
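The over-provisioning spread, sketched in numbers; the $0.10/kWh utility rate is my assumption purely for illustration, not a figure from the post:

```python
# Customer provisions 60 A of 208 V power but only draws 20 A.
# The vendor bills the full provisioned amount but pays the utility
# only for what is consumed.
volts, billed_amps, used_amps = 208, 60, 20
hours = 24 * 30                 # one month, roughly
rate_per_kwh = 0.10             # ASSUMED utility rate for illustration

billed_kwh = volts * billed_amps / 1000 * hours   # what the customer pays for
used_kwh = volts * used_amps / 1000 * hours       # what the vendor owes the utility
spread = (billed_kwh - used_kwh) * rate_per_kwh   # vendor's margin that month
```

Real circuits are derated for continuous load, but the shape of the spread is the same: two-thirds of the billed power was pure margin, until usage crept up.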

Slowly but surely the economy crawled back up and to the right (on a graphical basis), and with it came increased usage of the internet, ubiquity in broadband access, plummeting storage prices and innovation in usage of the internet in general. With the economy coming back, more people were employed, and they sure surfed the net at work (I think it would be interesting to see a study done on the productivity output of employees with internet access vs. employees without it). More people had disposable income, so they could afford the DSL or cable modem which allowed them to get further, faster in their online worlds and gave them new ways to interact with one another via social networks which blended and intermixed with their real-world lives. All of a sudden, that steady 20 amps of power consumption started to creep up. And up. And now, frighteningly, up. Up to the point that, as one VP of Ops at a big player in the space who shall remain nameless said, "this place could blow at any moment."

IMO, this was the point at which the tables turned in favor of the vendors. By now supply and demand were getting back to a state of equilibrium, and it forced the datacenter vendors to do what I refer to as 'robbing Peter to pay Paul.' In order to fully grasp that notion you must understand what a datacenter really does. At the end of the day, a datacenter provides space, power and environmentals to its customers. That is it. Datacenters don't provide managed services, service organizations do. Datacenters don't provide CDN or transit, ISPs and CDNs do. Datacenters don't provide storage, storage providers do that. We're talking about what the physical datacenter provides: space, power, environmentals and physical security. Some may argue that these vendors provided interconnectivity, and they did, but that was an added service layer that in actuality doesn't need to be a product of the vendor; it could be the product of anyone, or nobody (if it was free).

When a datacenter is built, you start out with a shell of a building and an amount of power that you can get delivered to that building. With that shell floor plan and that maximum amount of power available to you, you develop an overall layout of where things will go. Things being chillers, cooling towers, air handlers, generators, batteries, diesel storage, water storage, shipping and receiving, ingress/egress points, different authority levels of access, security and so on. You don't make these decisions without first knowing how much power you can get, because there is a direct correlation between that amount of power, how many pieces of the mechanical/electrical infrastructure plant will be required, and how much square footage they'll occupy in the building. Long way of saying there is a finite amount of power and environmental resources available for consumption. The standard increment or unit of measure in the market is either a rack or cabinet (42RU of actual space) or a sq ft.
Each rack takes approx 20 sq ft of space on the datacenter floor. In order to forecast revenue, the datacenter operator simply takes the total square footage of raised floor and divides by 20 sq ft to get the number of available rack spaces they can sell. And they did forecast revenue based on these simplistic, all-space-is-equal assumptions. So if you have 50k sq ft of space, you can sell 2500 cabinets. At $800/month per cabinet, you'll generate $24MM in annual revenue. Sounds like a good plan, right? The issue isn't its simplicity but rather that it covers only one piece of the equation: space. What about power? If you have 5 megawatts of power available for customer consumption across that 50k sq ft, you have a datacenter built to 100 watts a foot. If you have 7.5 megawatts of power available for consumption in that 50k sq ft, you have a datacenter built to 150 watts a foot. 10 megawatts and you have 200 watts a foot. And so on.
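The naive space-only forecast, plus the watts-per-foot figure it ignores, as a sketch using the numbers above:

```python
# The simplistic revenue forecast: divide the floor into 20 sq ft rack
# slots and multiply by the monthly rate.
sqft = 50_000
cabs = sqft // 20              # 2,500 cabinets
annual_rev = cabs * 800 * 12   # $24MM at $800/month per cabinet

# The piece the forecast leaves out: power density.
def watts_per_ft(sellable_mw, floor_sqft=sqft):
    return sellable_mw * 1_000_000 / floor_sqft

# 5 MW -> 100 W/ft, 7.5 MW -> 150 W/ft, 10 MW -> 200 W/ft
```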

Taking a step back, remember the example of the customer who was allowed to install 60 amps of 208v power in a single rack, or those 20 sq ft? 60 amps of 208v power in 20 sq ft equals about 500 watts a foot (at 80% of gross circuit capacity). Remember the notion of a finite amount of power coming into the building and the linear relationship between power and the amount of space required for mechanical gear? When power is delivered to customers, the customers consume it via their hardware infrastructure, and in doing so, that hardware gets hot, and gets hot quickly. Hence the beefy ACs required in datacenters. The same concept of the division of resources carries over and applies to environmentals. We still don't have a global standard unit of measure for the industry, because each building has different attributes, and a customer may achieve higher utility in one vendor's rack vs. another vendor's rack because of the difference in the amount of power available in that rack. For this reason, comparing Equinix rack pricing to Terremark rack pricing is useless unless you know the power per sq ft in each of their buildings. What point is there in trying to get Equinix, who for example's sake has built out a datacenter at 200 watts a foot and is offering racks for $1000 each, to lower its rate to the $700 monthly fee that Terremark is offering in a datacenter built to 100 watts a foot? Don't you see what a screaming deal you already have with Equinix? To get that same functionality or utility at Terremark would cost you $1400 a month per rack. (The vendors and associated numbers there are for example purposes only.) Circling back to the earlier example of eBay over-provisioning those 60 amps of power, or 500 watts a foot, in the 100-watt-per-foot facility, and you quickly realize that you, as the vendor, gave up 5 racks of space and associated revenue for every one rack of space that eBay pays for. And pays for at the lowest rate you ever gave.
The deal is 5X worse than you thought. Not only that, but the perception of your company to a stranger walking into your facility is that you are struggling, because your datacenter is only 20% occupied space-wise; those first 500 racks that eBay installed consumed all of the power and cooling resources. Now imagine you're the vendor who didn't catch this over-provisioning issue until you had oversubscribed your mechanical plant by a factor of 2 or 3X, and you have all customers' usage creeping up simultaneously. What do you do then? You say, "this place could blow at any moment" :) Those of us who lived through those types of situations and conditions will never get into them again. The first time around can be chalked up to ignorance. The second time would only be stupidity. This is evidenced by the hard lines the vendors take today on limiting the amount of power per rack they will allow customers to install.
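The density-adjusted price comparison above can be sketched by normalizing each rack's rate by the power behind it (using the example-only Equinix and Terremark numbers from the post):

```python
def dollars_per_watt(monthly_rate, watts_per_ft, rack_sqft=20):
    # Normalize rack pricing by the usable power each rack commands.
    return monthly_rate / (watts_per_ft * rack_sqft)

eqx = dollars_per_watt(1000, 200)   # $0.25 per watt-month at 200 W/ft
tmk = dollars_per_watt(700, 100)    # $0.35 per watt-month at 100 W/ft
# Matching the 200 W/ft rack's utility at 100 W/ft takes two racks:
# 2 x $700 = $1,400, exactly the figure in the post.
```

Per watt, the $1000 rack is actually the cheaper one, which is the whole point of comparing on power rather than space.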

Taking the earlier example of eBay using the entire pool of resources in 20% of the space in the building, you can view it one of two ways: either the supply of available space just shrank by 80%, or the demand for space just increased by a factor of five. The market adjusted itself, the tables turned in favor of the datacenter vendors, and there are no signs it will revert to its old ways. Sure, you hear a lot about new datacenters being built today, but remember, there hadn't been any significant investment in this space in about ten years. During those ten years, computing clusters got physically smaller and financially cheaper while increasing in performance, all of which results in more power consumption per rack unit: doing more in less space, but with no change in the power an amp delivers. The computers got more efficient in both performance and the amount of space they physically take up, but the power is what it is. That is a matter of physics. Efficiencies aren't a property of power; they're a property of the things that use power. Wrapping this up, the market has seen all sorts of technological progress on the hardware and software pieces of the equation, allowing users to pack more into less, but that 'less' consumes exponentially more power than the 'more' did in the previous scenario. The most scarce resource of a datacenter is power. And that means cooling too.



Sunday, July 01, 2007

Data Center Cheat Sheet - The Players in the space

If you're looking for a place to put your computers because you've determined that your office closet isn't the most conducive place to host your critical business apps, customer-facing service platform, customer database, website, or whatever else you're responsible for, chances are you're talking to one or more companies which provide datacenter services. The major national and international players in this space are:

Equinix - pioneered the carrier-neutral model and has risen to the top as the 800 lb gorilla. If you want your own private 'cage' and access to a boatload of carriers and ISPs, you may want to talk to them. North America, Asia Pac, Europe.

Digital Realty Trust - best performing REIT in '06, if I recall correctly. Customers of DRT typically pay the construction costs of their respective datacenter in DRT buildings. Customers include Equinix, Savvis, Internap, MSFT... basically everyone with the financial wherewithal and domain expertise it takes to make the leap of no return, i.e. spending the cash to build out core MEP infrastructure. If you want total control of EVERYTHING, which means running the day-to-day operations of the datacenter infrastructure and your computing infrastructure, you may want to have a chat with them. Global reach.

Savvis - includes some of the assets of Exodus, Digital Island, and Cable and Wireless. Smart and experienced management team. Seems to be focused on more than colo and is 'moving up the stack,' so to speak. If you're a customer of IBM or EDS, they will be similar in terms of their offerings. If you want a private cage and are planning on running every aspect of your business operations, they probably won't be the best fit, but what the heck, maybe they can run it better than you. In which case, you may want to have a chat. Global reach.

365Main - Carrier neutral, expanding rapidly, solid facilities, find current customers and get their take on overall experience. US based

CRG West - Carrier hotel centric, moving into more of a colo model recently, if you need hundreds of racks of space they probably aren't the best fit but if you need a small physical footprint in terms of space and a leveraged network footprint, they may be worth talking to. Owned by Carlyle Group which could mean they have easy access to capital but who knows how committed Carlyle is to the space. Carlyle was an original investor in Equinix and that didn't turn out as well as it should have for them so they may have less of an appetite for this space than the CRG West sales guy is telling you. US reach.

Terremark - Equinix wannabe and making great strides in removing the 'wanna' piece of it. Expanding in VA and CA, bought DataReturn which was a decent sized hosting provider. Historically built smaller sites in tier two markets with the exception of VA and CA. US reach. Great customer list but pretty much everyone on that list is a customer of all of these vendors.

Switch and Data - Very similar to Terremark but built more facilities than any of the other players, in smaller markets, and with smaller facilities(10K to 15K sq ft). Bought PAIX from Abovenet and in that regard has a great customer list but same attributes as the Terremark list.

Internap - Hesitant to include them, but my experience is that they are in most of the deals floating around; they are a wholesale customer of Equinix, 365 Main and others, and they also run their own sites, which they acquired over the years. Their domain expertise is on the networking side, not in running datacenters, but then again, if you're in an Internap cage inside Equinix, who cares?

AT&T - the former T had some decent facilities albeit ones which weren't built to support today's computing clusters and the associated power and cooling requirements. If you work at a small bank in the Midwest and are worried about getting fired for pushing the envelope as it relates to looking outside the box, you should talk to AT&T. You may never get through their onerous contract negotiations so you may get fired anyway. If you do manage to get through and become a customer of theirs, I have a feeling you won't have too much fun second guessing your decision. IBM and T are no longer job protectors to the decision makers they sell to.

Level3 - Was a player in the space 10 years ago, which is why I felt compelled and obligated to include them, but I don't consider them a true player any more. A lot changes in 10 years, and you can't upgrade a datacenter once it has peaked out its total design, especially if you have live customers in it.

Where there is smoke there is fire, and the fire here is white hot, the fire being demand for datacenter space. As such, a whole bunch of smaller players are emerging onto the scene to do their best to take down Equinix, just as a very young Equinix once tried to take down Exodus. If you're talking to these types of companies, I would guess that you have really small requirements, really large requirements, or aren't dealing with a mission-critical application. I state those three reasons not because small regional guys don't know what they're doing (how in the world could I know that?) but because the cost differential between them and Equinix or 365 Main is negligible, if anything. In fact, it would be logical to believe that Equinix and 365 Main would actually be lower priced than a small player due to the scale they're able to achieve in purchasing, operational efficiency and learning curve. 'Too small' to me means hosting your code on someone else's servers, so that may be a small webhost who runs their own physical datacenter. 'Too big' to me means you consume too many of the resources in the 'players'' building for you to be a good fit with their overall objectives.

If you're scratching your head wondering how that could be (too big of a customer for Equinix?), I will explain it in my next post:

- Data Center Cheat Sheet - are we a good fit based on our requirements?

Following that post will be:

- Data Center Cheat Sheet - Power and Cooling Mathematics - you will be shocked! No pun intended :)


Friday, May 11, 2007

Is Comcast going to buy Yahoo?

A good deal has been written about Comcast CEO Brian Roberts' announcement that 160mbps connections are just around the corner. The problem is the internet was built on best effort relationships and is nothing more than a bunch of networks connected together with no quality control on where or how these various networks interconnect. Without some standard for interconnections, it is impossible for Comcast to guarantee any performance of packets once they leave the Comcast network. The network is only as strong as its weakest link, so even though you may have the ability to get 160mbps, chances are you won't if the origin of your requested content is not physically in/on your AS.
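The weakest-link point above can be sketched in a few lines. This is my own illustration with made-up link capacities, not anything Comcast has published: end-to-end throughput across a chain of networks is capped by the slowest hop, which is exactly why a 160mbps last mile can't be guaranteed once packets leave Comcast's AS.

```python
# Illustration only: a path is a chain of links, and best-case throughput
# is the minimum capacity along that chain.
def effective_throughput_mbps(path_links_mbps):
    """Best-case end-to-end throughput for a path of link capacities."""
    return min(path_links_mbps)

# Hypothetical path: 160 Mbps last mile, a fat Comcast backbone,
# a congested interconnect, then the content origin's uplink.
path = [160, 10_000, 45, 1_000]
print(effective_throughput_mbps(path))  # 45
```

The fast last mile contributes nothing once a single congested interconnect sits in the path.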

IMHO this is more about setting the stage for a new approach to the Comcast network than bandwidth to the internet in general. It is in Comcast's best interest to get as much capacity on their own network as possible because it gives them a huge advantage over the likes of Google, MSFT, YHOO, Fox, etc. when it comes to QoS on specific applications. Besides, there is no possible way this could scale out to the internet, and that is on purpose... meaning Comcast will never be able to guarantee 160mbps anywhere but on their own network. These ever evolving broadband local networks, with some overlay backbone connecting them together, are the ideal situation for p2p distribution.

At the end of the day, Comcast wants to keep those packets on its network as long as possible because then and only then do they maintain 100% control. In doing so they have leverage with their customers, who are mainly home users. Service for the enterprise is not far behind, nor hard to accomplish... a simple dedicated line between the enterprise and Comcast's network and the enterprise is connected. Or better yet, the enterprise could outsource their IT services such as email, CRM, storage, collaboration, voice, video conferencing, etc. to Comcast and be guaranteed XX Mbps throughput from an employee's home to the corporate environment. Given all of the local and federal incentives for going green, which will trickle down to driving companies to promote telecommuting even more than they do now, this is actually a valuable service to be able to offer. Add video conferencing, VoIP, VOD, etc. on top of it and they'll have a network that offers an enterprise customer the most efficient way to address all of their voice/IP/IT requirements. In doing so it also reduces Comcast's reliance on upstream links by using them less (assuming p2p is in place) and relying more and more on peering.
Comcast was one of the early peering sluts, so to speak. When other MSOs thought peering was what you did when you teamed up with your second grade classmate on a book report, Comcast was signing network peering deals with anyone who sent a fair amount of traffic to them or vice versa.

I have mentioned Surewest previously, and they are currently offering fiber to the home in Sacramento. For customers where this is available, everything (voice, HDTV, IP, VOD, catv, etc.) is delivered over IP. It may seem cutting edge and far too complicated for a dumb old cable company to accomplish, but Comcast is no TCI even if they are saddled with some of the TCI infrastructure; Brian Roberts gets it and embraced it long ago. If that isn't enough, IP really is proving to be the most efficient delivery medium available. @home was developing products and applications around these virtues, and Comcast has merely expanded on that vision.

IMHO, so take it for what it's worth, Comcast and Google are the two powerhouses today. Comcast should buy Yahoo or, better yet, pull a trifecta Comcast/MSFT/YHOO combo. That would be the Google killer. Why bring in MSFT? They have lots of cash, a great revenue stream, lifelong customers and can turn office apps into a hosted infrastructure, which neither YHOO nor Comcast has done. All of these services may even be free to the end user because they'll all be subsidized by advertising. After all, it is advertising that drives GOOG's revs, and it will be advertising or subscription fees (I view them as the same, as they should be mutually exclusive) that surpass the licensing revenue of software vendors. Remember, advertisers are fickle and will spend their dollars where they think they are going to get the best return. As such, when a Comcast user needs to access Google servers, it is Comcast who controls how well that user gets that packet back, and it isn't necessarily in their best interest to make it smooth and seamless. Think about it: causing poor user experiences with Google is one way to discourage ad spend with Google. One piece at a time, and sooner or later it all adds up to a significant threat to Google's so called "unfair advantage".


Wednesday, April 18, 2007

Sun's Blackbox

Get your mind out of the gutter, I'm referring to their portable datacenter. I was able to attend one of Sun's introductory briefings today in Menlo Park. When Jonathan Schwartz first announced this as a product I was very skeptical and threatened. Skeptical because these containers are 160 sq ft and can support a 200kw draw. That is 1250 watts per square foot, albeit very isolated. And threatened because of the potential disruptive effect these new devices could have on the traditional datacenter market, my livelihood. Kinda.

I'm still skeptical, but not as much as I was. I'm definitely not threatened, not because I don't believe in the viability but because the two are more complementary than exclusive.

There are a few kinks to be worked out, or how shall I say, items that are quickly set aside during their presentations, but what did you expect? Marketing, marketing. Anyone know Al Hops?

Anyway back to this Blackbox.... A couple issues to note:

- these are NOT stand alone units. They require:

- multiple high voltage power connections in a minimum N+1 config - approx 250kw of provisioned primary power. (These aren't connections you just run an extension cord for. These are serious high voltage connections and as such require a serious infrastructure plant to get the connections down to the voltage required by the box. You don't call PG&E up and order one of these. Typically this will be a branch on a larger power grid, and in the datacenter world it can be likened to a 12kv branch to a PDU.)

- Cold water - Blackbox units require a cold water feed to support cooling off the payload, if you will. To support 200kw of draw takes approx 30 tons of chiller capacity for these Blackboxes. The chiller doesn't come with the Blackbox and doesn't fit on or in one. In fact, a 60 ton chiller, enough capacity for 3 boxes, is about the size of a box itself. Chillers require power to produce cold water, and you don't just plug a chiller into your wall outlet and be on your way. It requires the same or similar type of connections as the Blackbox: high voltage, high capacity circuits.
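For the curious, here's the textbook version of the sizing math, using the standard conversion of 1 ton of refrigeration = 12,000 BTU/hr, roughly 3.517 kW. Note this comes out well above the ~30 tons quoted in the briefing, so Sun is presumably assuming a lower effective heat load or a different chiller rating; the constant below is the generic rule, not Sun's number.

```python
# Back-of-the-envelope chiller sizing. KW_PER_TON is the standard
# refrigeration conversion (12,000 BTU/hr per ton ~= 3.517 kW), not a
# Sun-provided figure.
KW_PER_TON = 3.517

def chiller_tons(it_load_kw):
    """Tons of chiller capacity needed to reject it_load_kw of heat."""
    return it_load_kw / KW_PER_TON

print(round(chiller_tons(200), 1))  # 56.9 tons by the textbook math
```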

- Water Supply - HVAC systems will lose water to condensation, evaporation, leaks, overflows, etc., and that water needs to be made up to ensure smooth sailing. Maintaining the N+1 design, you need two supplies of water from separate suppliers. One is obviously your regular water supply, but what about the second? Dig a well like most datacenters do?

- UPS systems. There aren't any. Seriously. So that should tell me who the target customer is. Someone who doesn't care about uptime? Then why the hell buy all this crap? Why not host it on Amazon S3 or MediaTemple? Who doesn't care about uptime? Google is the only company I can think of, actually Amazon too, who wouldn't care if they lost 8 racks of servers. I just don't think Sun is far enough along to have a solution for UPS that doesn't make you take a step back and say, 'wait a second, where the hell am I going to park five tractor trailers so I can operate my 24 racks?' (That's three actual Blackbox containers, one container for the generator and batteries, and one container for the chiller.)
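The parking-lot math above can be sketched out. The 8-racks-per-container figure is my inference from the 24-racks-in-three-boxes example; the two extra trailers (generator/batteries and chiller) come from the footnote, and a bigger deployment might need more than one of each.

```python
# Trailer count for a hypothetical Blackbox deployment, using the
# figures implied in the post: 8 racks per container, plus one trailer
# for the generator/batteries and one for the chiller (assumptions).
import math

def trailers_needed(racks, racks_per_box=8, support_trailers=2):
    boxes = math.ceil(racks / racks_per_box)
    return boxes + support_trailers

print(trailers_needed(24))  # 5 trailers for 24 racks
```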

I sound like I'm bagging on Sun but I'm not really. I like the idea and know it's a definite winner in niche applications such as military use, natural disaster use, isolated locations where it can be airlifted in and so on.

The thing is, even if Sun owned the entire market for those specific applications, it still isn't going to get Sun where it needs to be; it's just too limited in size. Sun needs to find a way to make these Boxes the de facto standard choice when a company begins evaluating datacenter options. That, or sell the concept to the colo vendors by showing that the Boxes can compete economically with a standard raised floor environment. Coincidentally, just like a regular datacenter, in order to support a few of these boxes you will need a significant MEP plant, which is essentially the bread and butter of a datacenter, and datacenter operators are experts at managing MEP. It's a nice fit.

I liken the potential of Blackbox type architecture to what consumers are using Amazon's S3 grid or Google's own infrastructure (googleOS) for: a shared IT resource that supports unique data for each user and leverages commonalities among users. Everything is virtually connected and resources are shared, so if one goes down it doesn't matter, yet the performance benefit of close proximity is omnipresent.

Cost. The fully built out container (without the computers, chiller, generator and truck or helicopter to transport it) currently costs $500k to build. Sun alluded to a price point of $250k as the one they're shooting for. $250k for 200kw isn't a bad deal. Equinix spends about $25k per rack, or $1000/sq ft, for a 2.5kw rack. In gross numbers Sun's Box looks good at $1250/kw for the Box alone, while a traditional datacenter, per Equinix's rough costs, comes in at $10,000 per kw. I don't know what the cost of the chiller plant, electrical switches, etc. would be, but I imagine it can't be more than 60% of the total construction cost of the traditional datacenter, so add another $6000 per kw and multiply that sum, $7250, by the number of kw of draw and you get your total cost for the Box and the supporting MEP gear. In this case it is roughly $1.45MM for 200kw of datacenter equivalent. For Equinix, it would cost $2MM+.
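The arithmetic above, spelled out. These are the post's own figures (Sun's $250k target, Equinix's $25k per 2.5kw rack) plus my assumption, echoing the post, that the supporting MEP plant runs about 60% of a traditional build's cost per kw.

```python
# Rough cost-per-kw comparison using the figures in the post.
BOX_PRICE = 250_000                  # Sun's target price per container
BOX_KW = 200                         # rated draw per container
EQUINIX_COST_PER_KW = 25_000 / 2.5   # $25k per 2.5 kW rack -> $10,000/kW

box_cost_per_kw = BOX_PRICE / BOX_KW           # $1,250/kW for the Box alone
mep_cost_per_kw = 0.60 * EQUINIX_COST_PER_KW   # assumed shared MEP plant
box_total = (box_cost_per_kw + mep_cost_per_kw) * BOX_KW
equinix_total = EQUINIX_COST_PER_KW * BOX_KW

print(box_total)      # 1450000.0 -> roughly $1.45MM for the Box plus MEP
print(equinix_total)  # 2000000.0 -> $2MM for the traditional equivalent
```

Even with a generous MEP allowance, the Box comes in around 25-30% under the traditional build on these numbers.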

Lots of potential with this product but in order to be mass adopted it needs to demonstrate an economic benefit in addition to the obvious operational ones.


Saturday, March 31, 2007

No bubble here. Just efficiency.

Who said bubble? Nick Douglas has an interesting post on why he believes we aren't in a bubble. Ok ok, it's more of a rant on old timers (30 and above, according to Zuckerberg) who continually piss Chicken Little off by using his 'sky is falling' bit and not giving credit where credit is due. Nick may be right, and I think he is. This isn't a bubble.

From one old timer (yes, that would be me; 13 years ago in May I started my post educational career with NETCOM, who btw was the very first pure play internet company to do an IPO) to another, and to you non bubble veterans: this is different. Much different.

Why you ask? Here are a few important facts that come to mind:

- ubiquity in broadband access - access to a bb connection has grown exponentially. you don't even need a land line. wireless providers like sprint offer 700-800kbps connections that work. many consumers have a choice of which broadband provider they use. both of those issues were a pipe dream in web1.0 for lack of a better term. soon bb will be free. yes, i mean you will get a free broadband connection, likely wifi, wimax or some other wireless last mile so to speak. what am i talking about? see below

- a $15B a year advertising platform. huh? show me one F100 company with a marketing budget who doesn't buy online advertising. they aren't just experimenting, people, they are budgeting a major piece, if not the majority, of their ad dollars for the online arena. why, you ask? wake up!! it is the most efficient way to target their audience. example: if you use google mail or calendar or picasa or blogger or any other of their hosted apps, chances are pretty high that google knows much more about you than you think. don't believe it? next time you open your gmail account and open a msg in it, look at the ads. they are targeted by the words used in the text/body of your msg. what does this have to do with a bubble, you ask? stay with me because it means everything. for example's sake, we will assume i am a baseball card fanatic (no, I'm not, but I'm too lame to think of something else and it doesn't really matter anyway), and not just any baseball card fanatic, I'm one who likes only the Fleer brand. well now, google in this example (or whoever google shares info with) has knowledge, based on my emails and blogging and searches, that I like Fleer cards. so what? well, for starters, google can approach fleer and offer them a very targeted customer acquisition approach that is SUCCESS BASED, meaning fleer doesn't pay a dime unless they get a result. in the past fleer has advertised in trade rags and through partnerships, neither of which are cost effective or efficient. fleer is one example, but now let's take a look at the local card dealer in my neighborhood. this card dealer now has the ability to target me too, and it too can be success based. this guy's advertising up until now has been on the back of the little league opening day pamphlet that gets tossed in the garbage or left in the bleachers to blow away and be lost until the mowers chop it to pieces.
the only similarity to web1.0 is that there are still lawnmowers and the baseball card dealer is still in business (remember, he wasn't supposed to be, because the internet would replace him and all of the other realtors, car dealers, stockbrokers, grocery stores, etc).

if that isn't enough, here is more:

- web1.0 ideas were great but ahead of their time. remember the asp/msp space? guys like nonstopnet, one secure, totality, exodus, etc. those companies paved the way for companies like google (yes google), microsoft (yes microsoft) and apple (yes apple) to be doing what they're doing today, and that is selling software as a service. it's not like web1.0, where startups were selling services to other startups. this is the real deal. every enterprise today has email. where is it hosted? most of the time it is NOT hosted on the premises; it is outsourced to companies like Yahoo or Critical Path (old timers may get the irony on that one). this is startups selling to enterprises, and enterprises are buying because it is the most efficient way for them to do business. it's all about efficiency, and the current form of the internet offers an astronomically more efficient platform than we were graced with ten years ago.

- cost of bandwidth. a t1 (that is a 1.54Mbps connection) cost over $1000 per month ten years ago for access to the isp port, whose backbone was probably way oversold so you never stood a chance of getting that throughput to the internet, plus the local transport fee (aka local loop) of approximately $500 a month to the local telco. That works out to roughly $1000 per mbps. Today i get 8mbps for like $40 a month from my cable company. that would have cost me $8000 ten years ago, and i would have had to purchase $10k+ worth of hardware and the know-how to manage it. today my mom is online and she is the least common denominator when it comes to clue. that is just ten years. she, btw, was a naysayer back in web1.0 but you won't hear her dissing internet companies now. lets examine the efficiency gain to the consumer. what used to cost $8k a month and be available only to consumers with a very specific technical skill is now available to every tom, dick and harry with no tech skills for $40 a month. that is roughly a 200-fold gain in the utility of each dollar i spend for internet access, and that doesn't touch the growth in the viable user population.
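here's the then-vs-now math worked out, using the post's own figures (a ~$1000/mo T1 port plus a ~$500/mo local loop at 1.54Mbps, against $40/mo cable at 8Mbps):

```python
# Cost per Mbps, then vs. now, using the figures quoted in the post.
t1_cost, t1_mbps = 1000 + 500, 1.54   # port + local loop, per month
cable_cost, cable_mbps = 40, 8        # cable modem, per month

then_per_mbps = t1_cost / t1_mbps       # ~$974/Mbps ten years ago
now_per_mbps = cable_cost / cable_mbps  # $5/Mbps today
print(round(then_per_mbps / now_per_mbps))  # 195 -> ~200x more Mbps per dollar
```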

- storage - storage was expensive ten years ago, and ten years from now we will think it was expensive today. cost is one piece of the storage puzzle but not all of it. we didn't have SOX, HIPAA and all of the other government regulations we do today. a couple things have fueled this besides uncle sam rearing his big head: by virtue of the efficiency of the internet, more communication occurs, and with it comes exponential demand for access to the data of those communications. historically, if i had a conversation with my coworker, the employer never knew it even took place. today if i have a conversation (IM or email, or voip) the employer is required to record the digital content of such communications. what was an interaction available only to me and my coworker is now available to anyone with permission to view it. that is exponential growth in the population that creates demand for the data originating from an interaction between my coworker and me.

Due to uncle sam's regulations, companies must hold on to copies of that communication for a period of time. these companies aren't cutting down trees and printing this stuff on reams and reams of paper. they're putting it on disk. a single month of Procter & Gamble's internal communications probably used to kill an entire forest, cost millions of dollars to store in a secure facility, and take thousands of man hours and thousands of dollars in gas and diesel to transport; now it can be done virtually, put on a small piece of plastic (sorry, i know it's deeper than just being plastic but i'm trying to prove a point) and accessed by anyone anywhere. that is a far cry from the IT guy having to call up iron mountain and have them fedex the tape from june of 1999 to the office. we've become accustomed to accessing our own personal data, be it photos, music, writings, news, live video of your dog napping all day, whatever floats your boat. it all requires storage and we all have access to it. we didn't ten years ago.

as Nick Douglas points out, it seems like a fair amount of this naysaying is coming from old school players who alarmed the masses with 'web2.0: over and out' comments and Mark "only a moron would buy youtube" Cuban. if i'm sitting on the sidelines and i hear a billionaire and his compadre trying to scare people away from investing in the exact sectors they invest in, i think one thing, and it's sales 101: FUD - Fear, Uncertainty and Doubt - only they're using it inversely. most sales people use it to bring you in; they use it to scare you away. gotta love the effort: scare the competition away and there are fewer $ in the investment funds, which means the cost of capital goes up, which means Mark (he's an investor) and VCs get better returns and lessen their own risk. the fact of the matter is if you're in the internet space today, you chose to be. that wasn't the case 14 years ago. back then if you were a realtor you weren't part of the internet. today if you're a realtor you don't have to be a member of the internet community at large, but chances are it's not even a decision you consciously make; you just are, because it makes your job more efficient. if an investor really believed that there was no opportunity to get a high rate of return on investments which, at their core, bring efficiency to an already established market (something a certain someone refers to as being a copycat), then we would all be driving Fords and flying on PanAm, PSA and TWA. but we aren't, because investors provided the capital to young upstarts whose sole intent was to be a copycat, just a more efficient cat than the copied cat.

I don't know this as a fact, but I would find it hard to believe that the aggregate success rate on VC investments - defined as an exit which returns at least the initial investment plus the cost of that money over the period - has gone up or down much from 1997 to 2007. The supply of $ to invest has probably increased, which logically means the actual invested $ is greater now than it was ten years ago, which implies that more money will be lost next year than was lost in 1998. You don't need to be able to explain the Black-Scholes option pricing model to pick that up. It's common sense, right? It's cyclical, right? Yes to both, but go back through history and tie successful investments, as defined above, to both peaks and valleys and you will probably see those investments were made in copycat companies. Does Google ring a bell? Does Apple ring a bell? Does Oracle ring a bell? Does Siebel ring a bell? Does etrade ring a bell? Does Schwab ring a bell? Does Ticketmaster ring a bell? They should; they're all copying someone before them, only doing it in a more efficient manner.

Point is, if you define a bubble as a period of time in which many companies fail, then we have never not been in a bubble. If it is defined as a period in which there exists a larger supply of $ to invest than in the past, then you skipped the macro econ class that explained supply and demand, the velocity of money, and the trickle down, survival of the fittest - or in this case, most efficient - attributes of free market capitalism. You see, we don't need anyone to tell us we're in a bubble; if we were, the market would be telling us and the dollars would be shifting towards greater efficiencies in financial investing and resource allocation... ie, advertising $$.

I'm a big believer in Marshall McLuhan's theory of the global village and how it has the ability to transform what are/were isolated micro economies into a grander macro economy while providing a platform for seamless interoperability, aka trade, amongst them. If you have even an inkling of that notion then you have to recognize the irony of it all (third paragraph from the bottom). If people believe what was supposed to be possible isn't possible, then the opportunity for VCs or other suppliers of money to exploit that gap between the possible and reality is nil.

Bubble it is not. Easier access to money it is. Easier access to money means greater competition to sell your money to buyers (entrepreneurs). Greater competition means the odds of any one investment being the home run you need go down. The easiest way to make the competition go away? FEAR, UNCERTAINTY and DOUBT.

Chicken Little can sleep well tonight. He will find the sky is still there, above him, when he wakes up in the morning.

When the online advertising market stops growing, or shows signs of shrinking compared to the actual dollars spent the period before, then you should worry that what is possible and what is real are too far apart. Recognize it for what it is and know that, naturally, just like a SONET ring, there are self healing attributes which will prevail, and the market will retrace to a state of equilibrium. If that happens, it won't take long for the hyping of 'what is possible' to start all over again.


Tuesday, March 13, 2007

Google to Viacom: Bring it on!!

The quote below from Michael Arrington at Techcrunch is telling:

"The DMCA goes out the door under certain circumstances, like profiting from violations, refusing to take down offending content, etc."

YT profiting? Yeah right. My understanding is YT was a big money loser on its own and certainly wasn't ever generating a profit. And surely they aren't now; they're effectively being amortized, and $1.6B is a lot of write off. Sure, Google is profitable, but it won't be hard for anyone with a clue to separate out why YT is a drain on GOOG's profits.

To my knowledge, YT has no reasonably offending content on its service, and even if they did, it wouldn't matter unless it was Viacom's content, so this doesn't seem to be applicable. I'm not sure what other examples Michael means when he says 'etc', but this lawsuit is no big deal to Google. In fact, I imagine they played Viacom like Sumner plays the skin flute: masterfully.

5 years from now Google will have squashed Viacom and whoever else goes illogical on them. Squashed, because Viacom just condemned themselves to a slow and painful death. Who are they (Viacom) kidding, trying to go up against the new media king, Google? Viacom will learn that the internet, or IP, is the most efficient way of delivering content. More efficient than the other three networks: TV, Radio and Print. When Google rolls out free internet access to anyone in a coverage area and owns the entire user experience, or has significant influence on it, they will flick the Viacoms, CBS', GE's, Foxes, etc. of the world off their back like a fly being smacked off a horse's tail :)

Face it, Viacom makes money selling advertising. Google sells more advertising than anyone in the world and can buy any company that poses a threat. As we know, Google is branching out into TV and Print and even Radio. When they own all four distribution channels and combine all their advertising packages into one network, where will Viacom be?

Google's real threat isn't Viacom or any other lawsuit that comes their way. Their biggest threat is the DOJ. Comparing Google's scope to the scope of Ma Bell pre breakup is like comparing the Airbus A380 to a Boeing 737. Google, the A380, dwarfs the 737 in terms of scope, capabilities, load, range and power. Sneaky, Google is. They are getting as close to the edge as they can without grossly crossing it in the eyes of the general public. Will the DOJ intervene? Who knows, but if they set the precedent with AT&T, where the hell are they now? Oh yeah, how could I forget... they're busy protecting our youth from steroids and other human growth hormones.
