Traffic Jam: Are Internet Businesses Measuring Up?
There are issues (read: problems) with each method of calculating Web site traffic. It may well be that none of the figures cited above is accurate. Although you may have a tenuous grasp on your sanity each time you hear from the advertisers, you may believe that's acceptable. After all, this isn't supposed to be rocket science, just a method of estimation. The more troubling issue, however, is this: which number will you use for your SEC filings?
The Origins and Purpose of Internet Metrics
Measuring audience size is far from a new concept. Virtually since their inception, the television and radio industries have measured the number of people in the audience in order to determine advertising rates. Today, the cornerstones of television audience measurement (reach, frequency, gross rating points) are standardized by Nielsen Media Research. As Nielsen argues (from its slightly biased position), "an independent, third-party measurement system embracing the highest standards of accuracy and integrity is integral to the functioning of the television marketplace."1
Like their counterparts placing television ads, Internet advertisers purchase ad real estate on Web sites. The critical inquiry for an Internet advertiser is whether an advertisement on Web site X will maximize its advertising dollars by reaching more of its target audience than an advertisement on Web site Y. To answer this question, advertisers rely on data from three sources: the sites themselves, advertising aggregators like DoubleClick that sell Web site advertising space, and Internet marketing research firms like Nielsen/NetRatings and Media Metrix. This data, known as Internet metrics, typically includes information about the number of times a given Web page is requested or loaded, the number of visitors to the Web site, and the number of times an ad is clicked on.
Just as the Nielsen ratings can make or break a television program, Internet metrics have become gospel in the Internet Economy.2 However, "[t]here are only dozens of channels to grade on TV; there are millions of Web sites to be ranked."3 Moreover, unlike Nielsen ratings, Internet metrics are used for more than allocating advertising. As originally envisioned, audience measurement using Internet metrics was intended "[only] to provide online advertisers with an independent, third-party opinion on site audience size and quality, as a check on the veracity of the numbers provided by self-interested dot.coms."4 Nevertheless, analysts and investors soon co-opted these metrics to fuel their valuation models.
How Internet Metrics Became a Valuation Tool
As research analysts, consumers and dot.coms themselves struggled for a method to value fledgling businesses, people began to look to Internet metrics as a viable means of assessing a company's growth potential. Indeed, traditional metrics such as price/earnings ratios were of no help in valuing dot.coms with concededly no profits, or even a foreseeable horizon within which profits might emerge. Yet, no matter how the business models or earnings of dot.coms differed, advertising was the one constant that could be compared, apples to apples, and advertising is predicated on Internet metrics.
At the same time, companies realized that their stock prices could be influenced by the Internet metrics they released. That realization prompted a practice we will refer to as "metric shopping." Metric shopping (coined from the term "forum shopping") refers to a company's disclosure of the Internet metrics produced by the advertising aggregator or marketing research firm that suggests the highest audience numbers, even though those numbers may be inaccurate, or at the very least inconsistent with numbers generated by others.
As numerous articles from 1999 and 2000 illustrated, Internet metrics gained as much prestige with investors and analysts, who relied heavily on traffic reports when buying and selling stocks, as they had with advertisers.5 Indeed, rating figures became regular features in investment banking research on dot.coms,6 and the number of unique visitors to a company's Web site appeared to be "a positive and significant predictor" of stock price fluctuations.7 The problem is, Internet metrics subscribe to no uniform standard of measurement.
Internet Metrics and SEC Filings
If companies are promoting a valuation measure that may have no bearing on value, where does that leave investors? In general, the answer is caveat emptor: let the buyer beware. Disclosures of metric inaccuracies in SEC filings are paltry at best. Issuers may find a prominent place in their documents to highlight the number of unique users to their Web sites, but provide only a fine-print, general, boilerplate disclaimer as to the numbers' reliability.
Since the purpose of securities laws is to promote full disclosure and protect the investing public, shouldn't dot.coms be more forthcoming about the inaccuracies of the metrics they are touting? More to the point, since there is no consensus on how to calculate Internet metrics, much less about what they mean, how can publicly traded companies feature these numbers in prospectuses or press releases at all? Finally, when companies know, or have reason to know, that two or more credible sources have produced conflicting numbers for the very same metrics, how can they selectively disclose and highlight only the highest numbers without violating securities laws that prohibit materially misleading statements and omissions? In short, why are companies permitted to use these ratings in official documents with complete impunity?
Internet Metrics Gone Bad
Two examples demonstrate the problems Internet metrics present to investors and to the companies that tout them as indications of value. First, in a secondary offering in May 1999, Theglobe.com raised $70 million after boasting on the cover of its stock prospectus that it had 10.2 million users.8 That figure, which came from market researcher DoubleClick, "was more than triple the 3.2 million visitors [calculated] by [DoubleClick's competitor] Media Metrix."9 Nevertheless, Theglobe.com publicized only the DoubleClick number in its prospectus, without even mentioning the Media Metrix tally. Approximately one month later, however, Theglobe.com learned that the DoubleClick number it had highlighted was wrong. In contrast to the earlier fanfare, Theglobe.com quietly "retracted its statement [regarding the DoubleClick traffic figure]," declin[ing] to comment. Interestingly, DoubleClick blamed the miscalculation on "human error" and fault[ed] Theglobe.com for using the data in a securities document without permission.10
Alta Vista encountered similar difficulties with Internet metrics. In December 1999, Alta Vista filed for a $281 million initial public offering after seeing its traffic numbers climb to greater than 13.5 million users, making it the ninth-largest Web property according to Media Metrix. However, when Media Metrix reported February traffic figures in March 2000, Alta Vista appeared to have suffered a dramatic decline to 12.3 million visitors, lowering its ranking to the 13th-largest Web property. Because of this devastating traffic report (which would have been the most recent number available at the time of the planned IPO) and the downturn in the market, Alta Vista decided to pull its offering. In essence, the company's ability to go public and secure an acceptable valuation appeared largely to hinge on positive traffic numbers from Media Metrix. Alluding to the fact that Alta Vista's own records showed that the number of users actually increased during February, an Alta Vista official claimed "[Media Metrix's] inaccuracy, from our perspective, was devastating, and we're still trying to recover from it."11 The sad truth is that all Internet metrics are inherently inaccurate.
Do Investors Care About the Numbers?
The above examples demonstrate the effect Internet metrics can have on a company. The next logical question is whether investors actually rely upon the traffic numbers when making investment decisions. The correlation between Internet metrics and stock prices is best illustrated by the market responses to traffic numbers released by VitaminShoppe.com and SportsLine.com.
In November 1999, VitaminShoppe issued a press release noting that Media Metrix had listed it as one of the "Top 10 E-Commerce Gainers" for the previous week. "[T]he company's stock soared 59 percent that day." Similarly, "[f]rom April 1999 to May 2000, there were at least eight instances in which the reporting of ratings figures seemed to have a significant impact on the direction of SportsLine's stock price. When Media Metrix figures were favorable, the firm often touted the numbers in a press release and its stock price rose." And when the site dropped out of Media Metrix's top 50 sites in a November 1999 report, the stock fell nearly 22 percent in the following two weeks.12
Stock analysts also rely on Internet metrics. For example, in 1999, Bear Stearns analyst Scott Ehrens raised his rating of About.com from "attractive" to "buy" merely because reports demonstrated that traffic to the portal had increased significantly, but its price had not yet followed suit.13
Defining the Terms
To better understand the problems with Internet metrics, we first need to understand how they are calculated. One of the first metrics used by Internet firms was "hits."14 A hit is "the retrieval of any item, like a page or a graphic, from a webserver."15 A modern-day cousin to hits is the "page view," which is the accessing of a Web page. A page view "count[s] only the number of times a page has been accessed, whereas a hit counts the number of times that all elements in a page, including graphics, have been accessed."16 Thus, if a user requests a Web page containing four graphics, that request would translate into only one page view, but five hits (one for the Web page itself, and one for each of the component graphics).
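The four-graphics example above can be sketched in a few lines of code. This is an illustrative toy, not any vendor's actual counting logic: it assumes a simplified log in which each entry is just the path of one requested file, and it treats `.html`/`.htm` requests as page views.

```python
# Hypothetical sketch of the hits vs. page-views distinction.
# A real server log (e.g., Apache's combined format) carries many more
# fields; here each entry is simply the path of one requested file.

PAGE_EXTENSIONS = (".html", ".htm")   # assumption: these requests are "pages"

requests = [
    "/index.html",                         # the page itself
    "/img/logo.gif", "/img/banner.gif",    # graphics embedded in the page
    "/img/photo1.jpg", "/img/photo2.jpg",
]

hits = len(requests)                       # every file request counts as a hit
page_views = sum(1 for r in requests if r.endswith(PAGE_EXTENSIONS))

print(hits, page_views)                    # 5 hits, 1 page view
```

One user request for one page thus inflates the hit count by a factor equal to the number of embedded elements, which is exactly why hits flatter a site's traffic.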
Although hits and page views are calculated differently, both require analyzing Web server logs. These logs "itemize all requests for individual files, embedded images on a Web page, and any other associated files that get transmitted along with the Web page."17 Since both of these metrics are generated by Web servers, most dot.coms can calculate the page view and hit numbers on their own.18 Still, most Web sites elect to retain independent ad servicing firms in order to bolster the veracity of their internally generated numbers.19
Arguably the most oft-cited metric is "unique users." The unique user metric (sometimes known as a unique visitor metric) represents "the number of different individuals who visit a site within a given period of time."20 Simply stated, the unique user metric is a head count of surfers who have logged in to a Web site.
There are three ways to calculate unique users. The first method is to require users to register and provide a password to gain access to the site. The second method involves counting user IP addresses. The third method involves counting cookies.21 Cookie technology arguably is the most popular tool for calculating unique users, but it is also the most rife with accuracy problems.
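A brief sketch shows how the three methods can disagree about the very same traffic. The visitor names, IP addresses, and cookie IDs below are invented for illustration, and no firm's actual methodology is this simple; the point is only that the choice of identifier changes the head count.

```python
# Illustrative (hypothetical) visit records: each carries the registered
# username, the source IP address, and the browser cookie ID.
visits = [
    {"user": "alice", "ip": "10.0.0.1",    "cookie": "c1"},  # Alice at work
    {"user": "alice", "ip": "192.168.1.5", "cookie": "c2"},  # Alice at home
    {"user": "bob",   "ip": "10.0.0.1",    "cookie": "c3"},  # Bob, behind the
]                                                            # same gateway as Alice

by_registration = len({v["user"] for v in visits})    # 2: two real people
by_ip           = len({v["ip"] for v in visits})      # 2: gateway merges Alice & Bob
by_cookie       = len({v["cookie"] for v in visits})  # 3: Alice's two machines split

print(by_registration, by_ip, by_cookie)              # 2 2 3
```

Here the IP method happens to land on the right total only by coincidence (one merge cancels one split), while cookie counting overstates the audience by 50 percent, a small-scale version of the discrepancies described throughout this article.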
Why Internet Metrics are Unreliable
Three significant problems plague Internet metrics. First (and perhaps foremost) is inconsistent terminology. If each company has its own definition of a particular metric, it becomes impossible to undertake an apples-to-apples comparison, thereby vitiating one of the reasons analysts began utilizing traffic reports for valuation purposes in the first place. Internet metrics also suffer from flaws in the methodology used to calculate them and, sometimes, from purposeful manipulation by the companies whose traffic these numbers are meant to track.
Inconsistent use of terms.
To illustrate the definitional problem, let's start with the term "unique user." If a consumer views the same Web page four times in one day, "Media Metrix considers it one unique visitor and unique page view. Most Web sites, conversely, interpret the data as four separate visits and four page views."22
Indeed, SEC filings show quite a range of in-house calculation methods. Buy.com Inc.'s prospectus says "unique visitor" is "an industry term used to describe an individual who has visited a particular Internet site once or more during a specific period of time."23 Because it is unclear exactly what the "specific period of time" is, that definition does not explain how Buy.com would count two visits by the same person.
El Sitio, Inc. takes the same vague approach: "[U]nique visitors means users who visit [El Sitio's] Websites one or more times but who are counted in the relevant period as having visited the Websites only one time."24 Again, it is hard to ascertain when multiple visits by the same person will be considered one visit or more, since we aren't told what the "relevant period" is.
US Search Corp.com, by contrast, calculates unique visitors by counting "the number of people that access our Web site excluding multiple visits by the same person within a five minute period."25 Using that definition, two visits by the same person within five minutes will be counted as one unique visitor, while two visits by that same person spaced more than five minutes apart would be counted as two separate unique visitors.
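A five-minute deduplication rule of the kind the prospectus describes can be sketched as follows. This is a hedged reconstruction, one plausible reading of the quoted language, not US Search's actual code; the visitor IDs and timestamps are invented, and we assume the window is measured from the last counted visit.

```python
# Sketch of a "five-minute window" unique-visitor count (assumed semantics).
WINDOW = 5 * 60  # five minutes, in seconds

def unique_visitors(visits):
    """visits: list of (visitor_id, unix_timestamp) pairs."""
    last_counted = {}   # visitor_id -> timestamp of last counted visit
    count = 0
    for visitor, ts in sorted(visits, key=lambda v: v[1]):
        # Count the visit unless the same person was already counted
        # less than five minutes earlier.
        if visitor not in last_counted or ts - last_counted[visitor] >= WINDOW:
            count += 1
            last_counted[visitor] = ts
    return count

# One person visiting at t=0, t=4 min, and t=15 min:
print(unique_visitors([("anne", 0), ("anne", 240), ("anne", 900)]))  # 2
```

The example makes the definitional arbitrariness concrete: the same three visits would yield one "unique visitor" under Media Metrix's daily rule, two under this five-minute rule, and three under a raw visit count.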
Flaws in methodology.
The flaws in calculation methodologies can best be illustrated by analyzing two of the most prevalent metrics found in prospectuses and press releases: unique users and hits. As discussed previously, there are three methods to track unique users. The preferred method is counting cookies, but experts have identified at least five problems with cookie counting.
Calculating unique users through the registration method is problematic as well. This method might yield a conservatively low number, thereby resulting in undercounting, if friends, household members or office personnel share one user name and password. The more common problem, however, is overcounting. It is not unusual for users to register several times under different names (particularly when they forget their password or ID) or to submit bogus information for privacy reasons. The potential for multiple registrations increases when registration on a site is free.
Finally, the user hostname method for calculating unique users (the least popular method) poses problems because "many organizations use Internet gateways, which mask the real Internet hostnames, so user counts may be conservative."27 Conversely, counting each hostname as a separate user may cause overcounting, as there is no way to discern whether 100 log-ins from a company's gateway or proxy server constitute 100 unique individuals or a handful of very under-utilized employees.
Unique user metrics can also be manipulated by a company trying to make its traffic numbers look better than they are. For example, in one interview, Media Metrix [claimed that one] Web site "showed a sudden surge in user traffic, but no increase in the total number of pages the users were looking at. When the company investigated, it learned that a corporate directive at the Web site instructed its staff to increase the number of recorded users, one way or another."28
Companies can also inflate numbers for hits by counting a hit for each graphical element loaded when a user's computer loads a requested Web page. "A typical dispute between dot.coms and the traffic measurement firms centers on whether a user requesting a Web page that has 10 graphical elements on it can be counted as registering 10 hits or just one."29 Moreover, hits are counted when a page with graphical elements is merely accessed. That means search engines contribute to inflated hit counts, because they generate hits regardless of whether an actual user has asked to load the page and has seen the ads.
A recent study at USC's Marshall School of Business confirmed how easily Internet metrics can be distorted. "[R]esearchers studied thousands of hits on five major websites" and tabulated those hits on the basis of Internet protocol address alone. They then tabulated the hits more accurately by imposing mandatory logins and other identification methods. The resulting disparities were striking. "Using Internet protocol addresses alone as a means of identifying website hits led to a 39% underestimation of visits, a 64% overestimation of the number of pages seen by each visitor and a 79% overestimation of the time spent on each visit."30
Independent Advertising and Marketing Research Firms Arent Much Better
Because universal standards governing the use of Internet metric terminology and measurement of Web site traffic have not yet been established, each independent firm uses its own system for collecting and calculating metrics. The lack of uniformity among companies presumably tracking the very same data is well documented and, quite often, astounding. For example, the Internet Profiles subsidiary of Engage Technologies tracked a single ad as part of a larger study. "The Internet advertising firm DoubleClick recorded that it delivered 2.1 million advertising impressions, or views of the ad, to 1.8 million Internet users. But the Nielsen/NetRatings service found that about 649,000 impressions of the ad were delivered to 514,000 Internet users," less than a third of the numbers compiled by DoubleClick.31
Inconsistency is by no means limited to DoubleClick and Nielsen. Internet marketing research firm Media Metrix has also been the subject of criticism. One study found the ratings services now operated by Media Metrix and Nielsen/NetRatings favored larger sites (by showing more traffic than those companies' own records indicated) but undercounted the traffic to smaller sites in comparison to those sites' own records.32 Again, the core problem is that "[e]ach firm collects and computes data differently."33
To further complicate matters, it is very rare for independently generated traffic numbers to correspond to a company's internally calculated metrics. Generally, every company that hosts a Web site utilizes a Web server, which, in turn, automatically creates what are known as Web server access logs. "Every time a user requests a page on [a] site, that request is recorded in the access log."34 While companies may capture different details with their logs, every log produces traffic statistics. Yet even these internal logs present accuracy problems.
Turning again to the findings at the USC Marshall School of Business, "[e]rror rates for sites that track users through a log file of IP addresses are as great as 30 percent."35 Additionally, many internal logs suffer from the very same limitations plaguing the rest of the industry. For example, "[m]any programs that analyze the logs don't weed out traffic hits that are generated within the company or by robots that are trolling the Web on behalf of Internet search engines," let alone prevent deliberate fraud.36 Likewise, many internal log files tally the number of different computers that access the site, which can lead to double counting if a person surfs from both home and work on different computers.37
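The kind of filtering that, per the quoted passage, many log-analysis programs skip can be sketched in a few lines. The internal address range and the bot user-agent markers below are assumptions chosen for illustration; real crawler detection is considerably more involved.

```python
# Minimal sketch: drop requests from the company's own network and from
# known crawlers before counting traffic. The 10.0.0.0/8 range and the
# user-agent markers are illustrative assumptions, not a real site's rules.
import ipaddress

INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")   # assumed internal machines
BOT_AGENTS = ("crawler", "spider", "bot")           # assumed crawler markers

def countable(entry):
    """entry: dict with 'ip' and 'user_agent' keys, one per log line."""
    if ipaddress.ip_address(entry["ip"]) in INTERNAL_NET:
        return False                                 # internal traffic
    agent = entry["user_agent"].lower()
    return not any(marker in agent for marker in BOT_AGENTS)

log = [
    {"ip": "10.1.2.3", "user_agent": "Mozilla/4.0"},       # internal employee
    {"ip": "4.4.4.4",  "user_agent": "SearchSpider/1.0"},  # search-engine robot
    {"ip": "4.4.4.5",  "user_agent": "Mozilla/4.0"},       # real outside visitor
]
print(sum(countable(e) for e in log))  # 1
```

Of the three logged requests, only one represents an outside human visitor; a log analyzer that counts all three overstates traffic by a factor of three.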
To understand the problem, consider Alta Vista, which, after engaging in a lavish $120 million national marketing blitz, found that the Holy Grail of Internet ratings (unique monthly visitors) had inexplicably slipped away according to market research firms, despite the fact that Alta Vista's internal numbers showed a substantial bump in growth. Similarly, Gay.com, an online community based in San Francisco, noted a "Jordanesque jump" in visitors in March (an internal audit revealed 1.8 million U.S. viewers), but Media Metrix said the site had only 530,000 unique visitors that month.38
While the debate as to which firm provides the most accurate measurements rages on, investors continue to rely on this data for valuation purposes. That reliance, coupled with companies' apparent ability to pick and choose among different sampling methodologies, encourages metric shopping. For the securities industry, the question is whether to permit companies to continue highlighting traffic reports in public documents and releases in light of the clear disparity and lack of agreement within the very industry that calculates the numbers.
Lack of Disclosures Designed to Actually Inform Investors
Since the range of discrepancy in Internet metrics is vast, and no single number is clearly reliable, one would expect companies to offer a great deal of cautionary language so that investors and analysts do not give undue credence to Internet metrics for valuation purposes. Unfortunately, the actual disclaimers companies publish are less than ideal. Indeed, corporate disclosures regarding the accuracy and reliability of Internet metrics are vague and overbroad at best. Dot.coms prominently display favorable Internet metrics (typically using graphics and oversized, bolded typeface) at the beginning of a document, but bury boilerplate disclaimers several pages later without providing details about the known problems with the numbers.
For example, in Theglobe.com's secondary offering prospectus, the company highlighted, in color, on the inside front cover, that it had 10.2 million users. The document failed to mention, however, that Media Metrix (among others) had computed a vastly different number for this metric. So what disclosures did Theglobe.com provide? A few pages into the prospectus was a boilerplate statement that "[a]lthough we believe that the data are generally correct, the data are inherently imprecise. Accordingly, you should not place undue reliance on the data."39 In light of Media Metrix's much lower tally of unique users, it is questionable how Theglobe.com could harbor such a belief.
To be sure, Theglobe.com offered an additional disclaimer, in typical boilerplate fashion, in the "Risk Factors" section under the heading "Internet advertising may not prove as effective as traditional media."
The iVillage IPO prospectus was no better. The company boasted that, "[a]s of December 31, 1998, iVillage's membership consisted of approximately 960,000 unique members, up from approximately 170,000 unique members as of January 31, 1998. For the month ended December 31, 1998, iVillage.com had approximately 65 million page views and 2.7 million unique visitors."41 But in contrast to the painstaking detail the company offers to illustrate how its traffic has grown, the disclosure offered a number of pages later in the "Risk Factors" section is much less informative: "[t]here are currently no standards for the measurement of the effectiveness of Internet advertising, and the industry may need to develop standard measurements to support and promote Internet advertising as a significant advertising medium."42
Aren't disclosures like this misleading (or downright untruthful) when the company knows of actual limitations that affect the accuracy of an Internet metric and is aware that other firms have calculated conflicting numbers for that metric? Disclosing generally that the industry has not established a uniform method for defining and measuring Internet metrics, or that Internet metrics are inherently imprecise, is inadequate when information regarding known limitations, biases, discrepancies and other factors that may affect reliability and accuracy is available.
Even if the Internet advertising industry implements standards for the use of Internet metrics, it is questionable whether these standards will meet the needs of the securities community. As Allan Weiner, vice president of analytical services for NetRatings, opined, "[m]arket research is about providing people direction, understanding of a marketplace and trends."43 Estimates and trends may be fine for the advertising industry, but they are inadequate for the higher standard to which the securities industry is held, a standard that legally obligates public companies to avoid making statements that lack factual support. Mere conjecture rather than hard facts is unacceptable if a company knows of limitations that might undermine the veracity of its statements.44
Additionally, in the securities realm, we review the impact of publicly released information from the perspective of the reasonable investor.45 The knowledge and expertise we attribute to such an investor would most likely differ from that of an advertising executive regularly engaged in analyzing Internet metrics. Therefore, it is unlikely that any guidelines adopted by the Internet advertising community, devoid of additional investor protections, would satisfy securities laws pertaining to the use and disclosure of material information.
To truly ensure adequate investor protections in conjunction with the use of Internet metrics, the securities community should implement the following precautions, either through a self-regulatory body or through formal SEC regulations.
Disclose conflicting numbers.
We know publicly traded Internet companies regularly pick and choose among metrics and emphasize only the most favorable numbers in their filings and public releases. The only way to ensure that investors are not misled is to prohibit companies from promoting only part of the picture.
Whenever a company highlights or otherwise singles out an Internet metric, it should be required to disclose: (a) that there is another firm that has computed different numbers for the very same metric; (b) the name of the other firm; and (c) the actual discrepancy between the reported number and the number produced by the other firm. If there is more than one firm reporting different numbers for a particular metric, the company should be required either to disclose all the conflicting numbers or, at its option, to disclose the number that differs the most from the company's highlighted number, so investors can make an informed decision as to the veracity of the company's claims. Similarly, if a company's internal log files differ from the numbers the company highlights in its public documents, the company should also be required to disclose the numbers from its logs.
In the case of widely held Internet companies, including the high-profile dot.coms, it is a foregone conclusion that there will be multiple measurements for particular metrics, so this obligation will almost always attach to these companies. There can be no stronger evidence of the need for such disclosure than this statement from a spokesperson for BPA International, a nonprofit group that performs Web site traffic audits: "[w]e have had companies not release our audit figures because the numbers are lower than what they've said publicly."46 It is hard to imagine how a company can withhold such information and remain in compliance with securities laws.
Disclose the sampling methodology.
Because sampling methodology can have a substantial impact on the calculation of an Internet metric, companies that feature Internet metrics in their public communications should provide clear and conspicuous notice about how the company that generated the metric defines it (for example, what is a "unique user"?) and what population (both size and demographics) was sampled.
Make disclosures prominent.
Burying the foregoing information in the back of a document is not adequate, particularly if a company chooses to highlight an Internet metric in the front. In order to reasonably inform investors, disclosures regarding discrepancies in an Internet metric between different firms, limitations in the calculation of the metric, and the sampling methodology utilized should appear in close proximity to the actual metric chosen. Thus, in general, the information should be provided either in-line with the relevant text, or in a footnote that appears on the same page as the relevant text.
In 1998, the Future of Advertising Stakeholders (FAST) was assembled in response to growing divergence among Internet metrics definitions and gathering methodologies. The organization's purpose is to further "the quality and comparability of all online media audience measurement." In light of the need for universal standards of metric comparability and use, an organization such as FAST seems long overdue. Among other recommendations, FAST encourages the explicit disclosure of the sampling universe a company uses to compile a metric, and responsibility in disseminating metrics by requiring "precautions against misrepresentation or distortion of results."47
An Alta Vista official once opined that Internet metrics were like "wearing three watches at once, all with different times. You don't know what the right time is."48 We submit that, regardless of what time it is in dot.com land, the time is long past due for regulators to take a long, hard look at this issue and work with industry organizations, such as FAST, to bring some semblance of order and responsibility to this otherwise chaotic discipline.
All contents © 2001 Glasser LegalWorks.